Why AI is not the only solution: how to tackle the AI-first approach to research

Artificial Intelligence has been one of the most ambitious and exciting areas of technology for several years. The ability of a computer system to 'learn' from data, to make decisions or even to solve complex problems holds a seemingly endless fascination. Many experts believe that AI (Artificial Intelligence) innovation can truly revolutionise research, business and every aspect of our lives. Added to this are the continuous improvement of computing power, ever more advanced AI algorithms, and the growing willingness of both academia and industry to invest in systems that promise machine intelligence.

However, although the development of artificial intelligence is progressing rapidly, AI research is not the only form of research we should care about. The famous "AI-first" approach to research is becoming increasingly popular: almost every problem is approached starting from an AI model or machine-learning system, as if that were the only route to any scientific or technological challenge. This trend may initially seem positive, given the enthusiasm for AI applications that improve processes and produce useful results, but it also carries serious risks. Excessive faith in a single technology, even one as promising as AI, can crowd out other methodologies, encourage a one-dimensional view of scientific innovation and, in some cases, erode critical thinking.

In this article, we will attempt an in-depth analysis of the ways in which we can make substantive arguments against AI-first practice, drawing on information, ideas and data from the wider literature, as well as from the source article in Smashing Magazine (the full English-language text is linked at the end). While this material offers multiple useful perspectives, it is in no way intended to demonize AI. Rather, the aim is to show that AI's monopolization of scientific discourse and business strategy needs careful evaluation, critical thinking and ongoing study of alternative approaches. This is the only way to ensure a multidimensional evolution of innovation, based not on shallow fads but on a substantial epistemological foundation.

Historical review of AI and the roots of the "AI-first" strategy

The modern concept of AI can be traced back to the mid-20th century, when leading scientists such as Alan Turing were debating whether machines could "think". In the decades that followed, research around Artificial Intelligence went through alternating phases of excitement and decline (the so-called AI winters and AI springs). With the advent of neural networks, deep learning and easy access to vast amounts of data via the internet, AI has experienced an unprecedented resurgence. Technology companies have invested huge amounts of capital in infrastructure and personnel, aiming to exploit the advantages of machine learning and advanced algorithms.

Out of this upheaval emerged the concept of the "AI-first" approach. Essentially, the term became widely known when software giants announced that they would transform their entire operations, platforms and services with AI in mind. Every new product and every new research effort would start from the question of how AI could provide the solution. This has led to tremendous growth: from voice recognition systems for smartphones to advanced recommendation engines and online sales services that leverage predictive algorithms. The momentum has been, and remains, explosive; one need only recall recent developments in language models and image recognition applications.

However, along with this rapid growth came a tendency toward oversimplification: many scientific articles and research proposals started directly from an AI solution, without considering alternatives. In essence, an 'ideology' formed holding that AI is the best, if not the only, scientific approach. This primacy sidelined or neglected many other methodologies that either could be better suited to specific problems or could work in complementary ways with AI. In the broader context of science, this is a troubling monoculture.

The key criticisms of the AI-first approach

To argue against the AI-first approach, it is not enough to simply express a general distrust of AI. What is needed is a structured, reasoned approach that recognises both the benefits and the weaknesses of AI algorithms. Let's look at some of the key points that can form our "arsenal" of arguments:

  1. One-dimensional focus: When we start with a problem and immediately reach for an AI system or AI algorithms as the solution, we often overlook methodologies such as classical statistical analysis, actuarial models, qualitative approaches or even more "traditional" simulation techniques. AI is a powerful research methodology, but it is not a panacea.
  2. Underestimation of the human factor: Enthusiasm for the power of machine learning can lead to an underestimation of human intuition, empirical knowledge and social parameters that are often not easily captured in data sets. While AI excels at pattern recognition, humans can discern contextual details that no algorithm can access.
  3. AI risks and misuse: Various actors, from regulators to international academies, raise concerns about issues such as ethical AI, data bias, the opacity of black-box models and the impact of AI on society. When we invest in a single approach (AI-first) without considering alternatives, we risk adopting systems that may perpetuate discrimination or cause social harm.
  4. Lack of diversity in scientific research: One of the most fundamental lessons of science is that diversity and different perspectives enhance discovery. If all research funds are directed to AI projects, other disciplines and methods are left behind. This not only reduces overall progress, but can also lead to slower, more superficial improvements in AI itself, since interdisciplinary interaction is missing.
  5. Oversimplification of human thought: AI as it works today is based on mathematical models and often on statistical patterns. Although algorithms have become highly capable at specific tasks, they are still far from "general intelligence". The monopolization of research by the AI-first approach may be creating misconceptions about what human intelligence is and how it can be represented in computational models.
  6. Explainability problems: A major issue in many AI applications is the difficulty of understanding a model's "decisions". Complex neural networks are often so intricate that even their creators cannot explain every stage of the process in detail, especially in real time. In scientific research, the interpretation of results is as critical as the result itself; if we cannot explain how we reached a conclusion, the validity and scientific usefulness of that conclusion is called into question.

These points show why it may be important to be critical of the "AI-first" approach. They do not mean that we should stop funding or encouraging AI research. Rather, they highlight the need to be open to broader perspectives while maintaining a healthy variety of theoretical and practical tools.
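The explainability point above can be made concrete. The sketch below implements permutation importance, one common technique for probing a black-box model: shuffle one input feature and measure how much the error grows. The "model" and data here are invented for illustration; in practice the black box would be any trained system whose internals we cannot inspect.

```python
import random

# Stand-in "black box": a fixed scoring function over three features.
# In reality this would be an opaque trained model.
def black_box(row):
    x1, x2, x3 = row
    return 2.0 * x1 + 0.5 * x2 + 0.0 * x3  # x3 is deliberately irrelevant

def mse(rows, targets):
    return sum((black_box(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature_idx, seed=0):
    """Error increase when one feature column is shuffled: bigger = more important."""
    rng = random.Random(seed)
    column = [r[feature_idx] for r in rows]
    rng.shuffle(column)
    permuted = []
    for r, v in zip(rows, column):
        r2 = list(r)
        r2[feature_idx] = v
        permuted.append(r2)
    return mse(permuted, targets) - mse(rows, targets)

# Toy data whose targets follow the same rule, so baseline error is zero.
data_rng = random.Random(42)
rows = [[data_rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
targets = [black_box(r) for r in rows]

scores = [permutation_importance(rows, targets, i) for i in range(3)]
# Feature 0 should dominate, feature 2 should score zero.
```

Even this crude probe recovers which inputs actually drive the output, which is exactly the kind of interpretability question an AI-first proposal should be able to answer.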

Practical strategies to counter AI-first perception

For those who wish to present a targeted, persuasive critique of the AI-first strategy, there are specific modes of argumentation that can strengthen their case. The scientific community will not benefit from shouting matches and dogmatic debates, but from logical and well-informed positions. Some practical strategies are presented below:

  1. Comparison with traditional methods: If a research problem can be solved equally well (or better) by classical statistical techniques or simulation methods, point this out. A very simple algebraic solution or a time-tested model may be much clearer and cheaper than an AI system that requires enormous computational power.
  2. Highlighting the points of failure: Most studies around AI present the positive results (success stories), while the negative ones usually remain in the dark or receive far less publicity. Seek out and highlight examples where an algorithm "failed" because the data was incomplete or biased. These stories can illustrate, in practice, AI's impact on society when due diligence is lacking.
  3. Focus on human creativity: Human creativity, intuition and empirical knowledge are virtues unique to the human mind. An AI system, even with algorithms that simulate human thought, has no real consciousness or will. When we underestimate human thinking, we risk turning even scientific research into a routine exercise based on blind faith in data.
  4. Emphasis on transparency and explanation: Criticism of the AI-first trend can be directed at the issue of explainability. Structures, methods and protocols are needed to encourage the development of "transparent" algorithms, or at least ones whose decisions can be explored. Any promotion of AI that does not take seriously the need for interpretable results is incomplete.
  5. Interdisciplinary cooperation: Many of the most ambitious AI projects do not belong to computing alone; they touch on medicine, sociology, psychology, economics and other fields. Our critique can point out that AI scientists need to integrate broader disciplines into their work to ensure that models and applications meet real needs without harming or ignoring important parameters.
  6. Highlighting the regulatory framework: Institutions and laws often lag behind technological developments. Here, criticism can highlight the need to develop and implement an AI regulatory framework that protects privacy, ensures fair treatment and does not allow the opaque use of algorithms in critical areas (such as court decisions or recruitment assessments). The lack of such a framework is a serious risk when an AI-first approach prevails everywhere.

These rebuttal strategies aim to move beyond the purely theoretical. They propose practical approaches that can broaden the debate and oblige those who embrace the exclusivity of AI to acknowledge its limits and risks.
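As a small illustration of the first strategy, the sketch below fits a time-tested statistical model, ordinary least squares, in closed form and in pure Python, on invented synthetic data. Where a relationship really is (near-)linear, this gives two directly readable coefficients and a goodness-of-fit score, with no training infrastructure at all.

```python
import random

# Ordinary least squares for y = a*x + b, in closed form:
#   a = cov(x, y) / var(x),  b = mean(y) - a * mean(x)
def ols_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def r_squared(xs, ys, a, b):
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Synthetic data with a known relationship y = 3x + 1 plus mild noise.
rng = random.Random(0)
xs = [i / 10 for i in range(100)]
ys = [3 * x + 1 + rng.gauss(0, 0.1) for x in xs]

a, b = ols_fit(xs, ys)
r2 = r_squared(xs, ys, a, b)
# The slope and intercept are directly interpretable, unlike black-box weights.
```

The point is not that least squares beats machine learning in general, but that a critic can legitimately ask whether a problem this simple warranted an AI system in the first place.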

The importance of diversity in scientific research

When a new technology emerges promising radical change, it is only natural that the scientific community gets carried away with enthusiasm, all the more so when we see tangible results from AI applications in areas such as health (e.g. accurate tumour detection), industry, transport (autonomous vehicles) and many others. The much-touted idea that AI can optimise almost anything further entices investors and researchers.

However, science does not flourish from a single idea. It develops mainly through constant dialogue, experimentation, questioning and the synthesis of different theories and methods. The AI-first approach risks crowding out other scientific disciplines or, at the very least, assuming that the solution to every problem is to throw more computing power and more data at it. This can lead to a generalized perception that, since we have "big data" and AI, we don't need anything else.

This is a huge mistake. A deeper understanding of a phenomenon often results from the synergy of different sciences. In the study of climate change, for example, AI is valuable for analysing satellite data; but without climatology, geophysics, biology and related disciplines, we cannot interpret the results or propose sustainable solutions. The same applies to medicine, where AI-based diagnostic systems need to be supported by physicians, biologists, psychologists and many other professionals with a deeper understanding of human systems.

In addition, the debate on ethical AI and its social implications requires the contribution of lawyers, philosophers, sociologists and policy experts. An over-emphasis on technological aspects alone (e.g. how to optimise a neural network) can obscure the wider implications. Pluralism in science ensures that a solution or method will be evaluated from many angles - scientific, social, ethical, economic.

Even within computing itself, AI is only one discipline. Many other areas, such as computational theory, databases, system architectures and cybersecurity, are equally driving innovation. If we overlook them, we may end up with very "smart" AI applications but inadequate infrastructure or weak security foundations, putting at risk the data and systems on which AI itself relies.

Step-by-step guide to "taking on" an AI-first argument

Often, in discussions or presentations of research proposals, the following scenario comes up: someone claims that the solution to problem "X" is an AI model or, more drastically, that we no longer need classical methods because the AI-first approach is enough. How can we respond in a way that inspires respect and trust?

  1. Listen and understand: Before refuting, it is good to understand what the other side is proposing. What is the core of the argument? Are there already data, results, case studies?
  2. Investigate the problem: Ask what stage the problem is at. Is it exploratory, diagnostic or predictive? AI is well suited to predictive or classification tasks, but not always as effective in interpretive situations that require understandable causal relationships.
  3. Ask for alternatives: In good scientific proposals, there is always the question "Is there another way to approach this?". This helps reveal whether the AI-first proponent has considered other methods or is simply following the fad.
  4. Insist on transparency: Ask for an explanation of how the AI system makes its decisions. If you get the response "We don't know exactly, but it works", point out that a scientific method requires interpretation and reproducibility. This is even more critical when AI is used in areas where an error can be costly (e.g. medical diagnosis).
  5. Assess the volume and quality of data: If the approach is based on big data, then data of corresponding quality must be available. If the data is biased, incomplete or inappropriate, the model may draw incorrect conclusions. Criticism on this point can be particularly strong.
  6. Give examples of failure or false positives/negatives: If you know of cases where a seemingly promising AI model has failed, bring them up. Concrete examples of a system making glaring mistakes are often more persuasive than theoretical objections.
  7. Synthesise, don't demolish: The most effective rebuttal is not to say "AI is useless", which is not the case. Instead, acknowledge that AI has potential, but point out that it needs complementary or alternative approaches, human oversight, a multidisciplinary team, and so on.

This step-by-step guide shows a method that is not merely defensive; it attempts a productive exchange. In many cases, AI-first advocates come to see the gaps or weaknesses in their approach through calm, well-explained criticism.
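Step 5 of the guide, assessing data quality, can start with very simple checks. The sketch below audits a toy record set (the records and field names are invented) for two common problems: missing values per field and class imbalance in the labels, both of which should be surfaced before accepting any "the model will figure it out" claim.

```python
from collections import Counter

# Invented toy records; in practice these would come from the proposed dataset.
records = [
    {"age": 34,   "income": 52000, "label": "approve"},
    {"age": None, "income": 48000, "label": "approve"},
    {"age": 29,   "income": None,  "label": "approve"},
    {"age": 51,   "income": 91000, "label": "reject"},
    {"age": 42,   "income": None,  "label": "approve"},
]

def audit(records, label_key="label"):
    """Count missing (None) values per feature and tally label frequencies."""
    fields = [k for k in records[0] if k != label_key]
    missing = {f: sum(1 for r in records if r[f] is None) for f in fields}
    labels = Counter(r[label_key] for r in records)
    return missing, labels

missing, labels = audit(records)
# 'missing' exposes incomplete fields; 'labels' exposes class imbalance.
```

A model trained on records like these would see four "approve" examples for every "reject", and two of five income values absent, which is precisely the kind of finding that makes the data-quality criticism stick.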

Ethical and social parameters: Far beyond technology

AI research is not a closed field that concerns only programmers and data scientists. It has direct social, ethical and political implications. In a society that is being steadily digitized, where our every move can be recorded, the indiscriminate use of AI algorithms can jeopardise privacy, human rights and social justice.

  • Bias in the data: Machine learning models learn from the available data. If that data is historically biased, the algorithm risks reproducing or even reinforcing the same biases. For example, systems that screen CVs may unfairly disadvantage certain social or racial groups because the historical recruitment data was already biased.
  • Free will and manipulation: AI-first logic in advertising, social media platforms and Internet search can manipulate people's preferences in ways they do not even realise. Serious questions arise about how free our choices are when everything around us is designed to steer us in certain directions (e.g. clickbait, personalized ads).
  • Transparency in the public sector: When governments or public bodies adopt AI systems (e.g. for person identification, application processing and so on), the question arises whether citizens know and understand how decisions are made. An AI-first approach that does not provide for appropriate transparency controls may violate the democratic principle of accountability.
  • AI regulatory framework: Lawmakers around the world are struggling to understand and regulate new developments. In Europe, for example, efforts have been made with the General Data Protection Regulation (GDPR), and there is an ongoing debate about imposing rules specifically for AI. Critical voices point out that an adequate regulatory framework should not be based on oversimplifications or technophobic reactions, but on thorough approaches that take into account both AI innovation and the protection of human rights.

In short, endlessly encouraging AI-first thinking without proper control is tantamount to handing much of social organization over to "black boxes" that may operate neither fairly nor with full transparency.
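The bias concern above can also be quantified. The sketch below computes per-group selection rates and their ratio, in the spirit of the "four-fifths rule" used in some employment-discrimination guidelines (a ratio below 0.8 is a common red flag). The decisions are invented; in practice they would be the outputs of a model such as a CV-screening system.

```python
def selection_rates(decisions):
    """decisions: iterable of (group, selected_bool) -> selection rate per group."""
    totals, selected = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if ok else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest; < 0.8 is a common red flag."""
    return min(rates.values()) / max(rates.values())

# Invented outcomes: group A selected 8 of 10 times, group B only 4 of 10.
decisions = ([("A", True)] * 8 + [("A", False)] * 2 +
             [("B", True)] * 4 + [("B", False)] * 6)

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
```

A check this small obviously does not settle whether a system is fair, but it shows that the critique "your model may discriminate" can be backed by a concrete, auditable number rather than a vague worry.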

The future of AI: Challenges and opportunities

Despite the criticism of the AI-first approach, no one disputes that the future of artificial intelligence promises to be exciting. The ability of computer systems to process volumes of data beyond human capabilities and to discover complex correlations is undoubtedly bringing revolutionary changes. From predicting epidemics to creating personalised treatments, AI has countless potential applications that could radically improve quality of life.

At the same time, enthusiasm must coexist with realism. If we really want AI to be a force for good, the research behind it cannot be one-dimensional. It must be polyphonic, controlled, collaborative and regulated. Data scientists, analysts and software engineers will benefit most when they collaborate with biologists, sociologists, economists, lawyers and ethics experts. Only in this way will balanced development be achieved.

Moreover, as AI becomes ever more integrated into our daily lives (voice assistants, recommendations on entertainment platforms, smart homes, autonomous vehicles, etc.), education and understanding among the general public are of paramount importance. People need basic digital literacy and the ability to critically evaluate how AI systems work. Otherwise, they will find themselves at a disadvantage, with an 'elite' of experts and technology companies unilaterally determining the course of events.

In this context, criticism and questioning of the AI-first ideology are not obstacles. On the contrary, they act as checks and balances that ultimately improve the quality of AI applications and shield society from failures and abuses. Dialogue, documentation and healthy disagreement are essential if research in Artificial Intelligence is to continue to flourish without reaching dead ends.

How to defend a more integrated approach

The "AI-first" approach may sound like an innovative quick fix for many of today's challenges, but by taking a one-dimensional view we miss the essence of science: research diversity, critical thinking and interdisciplinary collaboration. The ability of AI to identify patterns and produce useful results is not in doubt; whether that ability alone is sufficient to solve every issue very much is.

The balance between the development of artificial intelligence and other methodologies is the surest path to sustainable scientific and technological development. AI offers tools, but it should not become the only "key" to every "locked mystery". We need classical research, theoretical foundations, field experiments, the social sciences, human creativity and understanding, and above all an ethical code for the responsible use of Artificial Intelligence.

Therefore, if we want to oppose an AI-first narrative in a scientific debate or a business meeting, we should insist on clear and concrete arguments in the areas of transparency, explainability, diversity in research, ethical use, social implications and the availability of alternative approaches. We are not saying 'no' to AI; we are saying 'yes, but with limits and complementarity'.

Moreover, when our audience or colleagues hear that AI-first can have negative consequences, we should not leave them with the impression that AI is not useful. Instead, we emphasize that AI is already one of the most powerful tools of the present (and, clearly, of the future). However, this power must be placed within a broader framework of scientific ethics, data quality assurance, human rights protection and the development of regulatory structures that safeguard the interests of society and science.

The "brake" we put on an uncontrolled AI-first culture is not a brake on progress; it is the guardrail that keeps progress properly targeted, democratic and credible. Ultimately, this is perhaps the best way forward for Artificial Intelligence in research and in practical applications: an open, critical and diverse dialogue that gives space to every innovative idea, not just to algorithms.

Source: https://smashingmagazine.com/2025/03/how-to-argue-against-ai-first-research/
