An artificial intelligence façade concealed human effort. The revelation that a start-up claiming to offer an innovative AI-powered solution was in fact propped up by human workers raises ethical and legal questions. The manipulation of investors and the inflated valuation of a company built on manual labor rather than autonomous code show how much integrity is at stake in the tech sector. A large-scale scam has been exposed, one that demonstrates how fragile the foundations of trust in technology can be.
The shocking revelation of a tech start-up
Builder.ai, a London-based start-up, was recently at the center of a scandal after revelations that its artificial intelligence, named Natasha, actually relied on the work of 700 people. These workers, based primarily in India, were hired to write code by hand for between 8 and 15 dollars an hour. Operating this supposed AI thus demanded genuine human effort, a far cry from the autonomous programming machine the company advertised.
An elaborate illusion of artificial intelligence
The promise of Builder.ai was alluring: to make app development as simple as ordering a pizza. This pitch attracted significant investment from major backers, including Microsoft and investors from Qatar. The start-up’s valuation rose meteorically to 1.5 billion dollars, earning it “unicorn” status in the start-up world.
A misleading business model
Problems intensified in 2025, when Builder.ai was forced to acknowledge that its sales projections were largely unrealistic. One of its main creditors then blocked 37 million dollars of the company’s assets, triggering a chain of events that ended in bankruptcy. The apparent solidity of its business model turned out to be a mirage.
The hidden human workforce
The big revelation came on May 31, when investigations exposed that Natasha, the system presented as an innovative artificial intelligence, worked only with intensive human support. Engineers used their expertise to produce code that the start-up claimed its technology generated automatically. The deception casts a shadow over the supposed progress of artificial intelligence.
Consequences for the perception of AI
This situation raises questions about the trust placed in artificial intelligence systems. Users and investors are now forced to question the true capabilities and limits of emerging technologies. Are the promises genuinely fulfilled, or do they conceal an undisclosed reliance on human labor passed off as technological advancement?
Effects on the tech ecosystem
The fall of Builder.ai could have repercussions throughout the start-up ecosystem. Investors may become warier of companies making unverified automation claims. This scandal marks a grim turning point, urging every player to reassess its approach to technological development and to question the very foundations of innovation.
The role of major companies
Microsoft, one of the main investors, and the Qatari backers whose financial support was crucial also find themselves embarrassed by this affair. These entities must now evaluate the rigor of the verification processes they apply when investing in tech. The fallout from this case may lead them to revise their strategies for evaluating start-ups.
Frequently asked questions
What is Builder.ai and what was its original goal?
Builder.ai was a London-based start-up founded in 2016, promising to simplify app development using an artificial intelligence named Natasha, supposedly capable of coding automatically.
How did Builder.ai manage to attract investors like Microsoft and Qatar?
Builder.ai’s promise of an innovative and accessible solution, backed by an appealing business model built on high growth projections, convinced many investors.
Why did the company go bankrupt in 2025?
Builder.ai was overvalued on the basis of unrealistic sales projections and artificially inflated revenue, which led a creditor to block 37 million dollars in assets and pushed the company into bankruptcy.
How was Builder.ai’s artificial intelligence really powered by human labor?
Behind the AI Natasha, some 700 workers in India wrote code by hand, while their output was presented as the product of AI automation.
What was the salary of the workers involved in generating code for Builder.ai?
The workers in India were paid between 8 and 15 dollars an hour for writing code, a far cry from the costless automation the company’s AI branding suggested.
What impact did this revelation have on the credibility of tech start-ups?
This situation highlighted the risks of overvaluation and misinformation in the start-up sector, potentially eroding investors’ confidence in supposedly innovative technologies.
Is the situation of Builder.ai unique or a reflection of common practices in the industry?
Although each case is unique, such practices of information manipulation and overvaluation are not rare in the start-up sector, raising ethical and practical questions about transparency.
What lessons can be learned from the Builder.ai case regarding artificial intelligence?
This case underscores the importance of transparency in technological claims and the need for due diligence by investors to avoid scams and façades in the field of AI.