Artificial Intelligence has been one of the most ambitious and exciting areas of technology for several years. The fascination with a computer system's ability to 'learn' from data, to make decisions or even to solve complex problems seems endless. Many experts believe that AI (Artificial Intelligence) innovation can truly revolutionise research, business and every aspect of our lives. Added to this is the continuous improvement of computing power, ever more advanced AI algorithms and the growing willingness of both academia and business to invest in systems that promise machine intelligence.
However, despite the fact that the development of artificial intelligence is progressing rapidly, AI is not the only form of research that deserves our attention. Today, the famous "AI-first" approach to research is becoming increasingly popular: almost every problem is approached starting from an AI model or machine learning system, as if that were the only route for any scientific or technological challenge. Although this trend initially seems positive, thanks to the enthusiasm for AI applications that improve processes and produce useful results, it also carries serious risks. Excessive faith in a single technology - even one as promising as AI - can lead to the neglect of other methodologies, a one-dimensional view of scientific innovation and, in some cases, an erosion of critical thinking.
In this article, we will attempt an in-depth analysis of the ways in which we can make substantive arguments against AI-first practice, drawing on information, ideas and data from the wider literature, as well as from the source article in Smashing Magazine (you can find the full English-language text at the link at the end). While this material contains multiple useful perspectives, it is in no way intended to demonize AI. Rather, the aim is to clarify that AI's monopolization of scientific discourse and business strategy needs careful evaluation, critical thinking and ongoing study of alternative approaches. This is the only way to ensure a multidimensional evolution of innovation, based not on shallow fads but on a sound epistemological foundation.
The modern concept of AI can be traced back to the mid-20th century, when pioneering scientists such as Alan Turing debated whether machines could "think". In the decades that followed, research around Artificial Intelligence went through alternating phases of excitement and decline (the so-called AI winters and AI springs). With the advent of neural networks, deep learning and easy access to vast amounts of data via the internet, AI has experienced an unprecedented resurgence. Technology companies have invested huge amounts of capital in infrastructure and personnel, aiming to exploit the advantages of machine learning and advanced algorithms.
Out of this ferment, the concept of the "AI-first" approach emerged. The term became widely known when software giants announced that they would transform their entire operations, platforms and services with AI at the centre. Every new product and every new research project would begin from the question of how AI could provide the solution. This has led to tremendous growth: from voice recognition systems for smartphones to advanced recommendation engines and online sales services that leverage predictive algorithms. The momentum, indeed, has been (and remains) explosive, if we only recall recent developments in language models or image recognition applications.
However, along with this rapid growth came phenomena of oversimplification: many scientific articles and research proposals started directly with an AI solution, without considering alternatives. In essence, an 'ideology' formed that AI is the best, if not the only, scientific approach. This primacy sidelined or neglected many other methodologies that either could be better suited to specific problems or could work in complementary ways with AI. In the broader context of science, this is a troubling monoculture.
To argue against the AI-first approach, it is not enough simply to express a general distrust of AI. What is needed is a structured, reasoned approach that recognises both the benefits and the weaknesses of AI algorithms. Let's look at some of the basic points that can form the "arsenal" of arguments:
These points show why it may be important to be critical of the "AI-first" approach. They do not mean that we should stop funding or encouraging AI research. Rather, they highlight the need to be open to broader perspectives while maintaining a healthy variety of theoretical and practical tools.
For those who wish to present a targeted, persuasive critique of the AI-first strategy, there are specific modes of argumentation that can strengthen their case. The scientific community benefits not from shouting matches and dogmatic debates, but from logical and informed positions. Some practical strategies are presented below:
These rebuttal strategies are not meant to remain at a purely theoretical level. They propose practical approaches that can broaden the debate and push those who embrace the exclusivity of AI to acknowledge its limits and risks.
When a new technology emerges promising radical change, it is only human and expected that the scientific community gets carried away with enthusiasm. All the more so when we see tangible results from AI applications in areas such as health (e.g. accurate tumour detection), industry and transport (autonomous vehicles), among many others. The much-touted idea that AI can optimise almost anything further entices investors and researchers.
However, science does not flourish from a single idea. It develops mainly through constant dialogue, experimentation, questioning and the synthesis of different theories and methods. The AI-first approach risks crowding out other scientific disciplines or, at the very least, assuming that the solution to every problem is to throw more computing power and more data at it. This can lead to the generalized perception that, since we have "big data" and AI, we don't need anything else.
This is a huge mistake. A deeper understanding of a phenomenon often results from the synergy of different sciences. In the study of climate change, for example, AI is valuable for analysing satellite data. However, without climatology, geophysics, biology and related disciplines, we cannot interpret the results or propose sustainable solutions. The same applies to medicine, where AI-based diagnostic systems need to be supported by physicians, biologists, psychologists and many other professionals who have a deeper understanding of human systems.
In addition, ethical AI and the debate on its social implications require the contribution of lawyers, philosophers, sociologists and policy experts. An over-emphasis on technological aspects alone (e.g. how to optimise a neural network) can obscure the wider implications. Pluralism in science ensures that a solution or method will be evaluated from many angles - scientific, social, ethical, economic.
Even within computing itself, AI is but one discipline. There are many other areas, such as computational theory, databases, system architectures and cybersecurity, that are equally driving innovation. If we overlook these areas, we may find ourselves with very "smart" AI applications but inadequate infrastructure or weak security foundations, putting at risk the very data and systems on which AI relies.
Often, in discussions or presentations of research proposals, the following scenario comes up: someone claims that the solution to problem "X" is an AI model or, more drastically, that we no longer need classical methods because the AI-first approach is enough. How can we position ourselves in a way that inspires respect and trust?
This step-by-step guide describes a method that is not merely defensive; it attempts a productive exchange. In many cases, AI-first advocates come to recognise gaps or weaknesses in their approach through calm, explanatory critique.
AI research is not a closed field that only concerns programmers and data scientists. It has direct social, ethical and political implications. In a society that is constantly being digitized, where our every move can be recorded, the indiscriminate use of AI algorithms can jeopardise privacy, human rights and social justice.
In short, endlessly encouraging AI-first thinking without proper oversight is tantamount to handing much of social organization over to "black boxes" that may operate neither fairly nor with full transparency.
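To make the "black box" concern concrete, here is a minimal, purely illustrative Python sketch; all names, features and weights are hypothetical and not drawn from the article. An interpretable scoring rule can itemize why it reached a decision, while an opaque model exposes only the verdict - which is exactly what makes auditing for fairness and transparency so hard.

```python
# Illustrative sketch (hypothetical features and weights): contrasting a
# model whose reasoning can be audited with one that hides it.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def interpretable_score(applicant: dict) -> tuple[float, dict]:
    """Return the score AND a per-feature breakdown a reviewer can audit."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

def black_box_score(applicant: dict) -> float:
    """Stand-in for an opaque model: same decision, no rationale exposed."""
    score, _ = interpretable_score(applicant)  # internals hidden from callers
    return score

applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, why = interpretable_score(applicant)
print(f"score={score:.1f}")  # score=1.9
for feature, value in why.items():
    print(f"  {feature}: {value:+.1f}")  # each factor's contribution

# The opaque version gives the same number, but a rejected applicant
# (or a regulator) has no way to ask *why*:
print(black_box_score(applicant))
```

The point of the sketch is not the arithmetic but the interface: the first function can answer "which factor weighed against me?", the second cannot, and real deep-learning systems sit firmly in the second category unless explainability tooling is added deliberately.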
Despite the criticism of the AI-first approach, no one disputes that the future of artificial intelligence promises to be exciting. The ability of computer systems to process volumes of data beyond human capabilities and to discover complex correlations is undoubtedly bringing revolutionary changes. From predicting epidemics to creating personalised treatments, AI has countless potential applications that could radically improve the quality of life.
At the same time, enthusiasm must coexist with realism. If we really want AI to be a force for good, then the research behind it cannot be one-dimensional. It must be polyphonic, controlled, collaborative and regulated. Data scientists, analysts and software engineers will benefit most when they collaborate with biologists, sociologists, economists, lawyers and ethics experts. Only in this way will balanced development be achieved.
Moreover, as AI becomes more and more integrated into our daily lives (voice assistants, recommendations on entertainment platforms, smart homes, autonomous vehicles, etc.), education and understanding among the general public are of paramount importance. The public must have basic digital literacy and be able to critically evaluate how AI systems work. Otherwise, society will find itself at a disadvantage, with an 'elite' of experts and technology companies unilaterally determining the course of events.
In this context, criticism and questioning of the AI-first ideology are not obstacles. On the contrary, they act as checks and balances that ultimately improve the quality of AI applications and shield society from failures and abuses. Dialogue, documentation and healthy disagreement are essential if research in Artificial Intelligence is to continue to flourish without leading to dead ends.
The "AI-first" approach may sound like an innovative, quick fix for many of today's challenges, but by taking a one-dimensional approach, we miss the essence of science, which is research diversity, critical thinking and interdisciplinary collaboration. The ability of AI to identify patterns and produce useful results is not in doubt, but whether this ability alone is sufficient to solve all issues is questionable.
The balance between the development of artificial intelligence and other methodologies is the surest way to sustainable scientific and technological development. AI offers tools, but it should not become the only "key" to all "locked mysteries". We need classical research, theoretical foundations, field experiments, social sciences, human creativity, understanding, and above all an ethical code for responsible use of Artificial Intelligence.
Therefore, if we want to oppose an AI-first narrative in a scientific debate or business meeting, we should insist on clear and concrete arguments in the areas of transparency, explainability, diversity in research, ethical use, social implications and the possibility of alternative approaches. We are not saying 'no' to AI; we are saying 'yes, but with limits and complementarity'.
Moreover, when our audience or colleagues hear that AI-first can have negative consequences, we should not leave them with the impression that AI is not useful. Instead, we emphasize that AI is already one of the most powerful tools of the present (and, obviously, of the future). However, this power must be placed within a broader framework of scientific ethics, data quality assurance, human rights protection and regulatory structures that safeguard the interests of society and science.
The "brake" we put on an uncontrolled AI-first culture is not a brake on progress; it is the buffer that keeps progress properly targeted, democratic and credible. And, ultimately, this is perhaps the best way forward for Artificial Intelligence in research and practical applications: an open, critical and diverse dialogue that gives space to every innovative idea, not just algorithms.
Source: https://smashingmagazine.com/2025/03/how-to-argue-against-ai-first-research/