It is generally accepted that humans use Artificial Intelligence as a tool to improve their productivity: they shape tasks and receive results through services built for that purpose. This model of interaction is convenient, familiar and, most importantly, psychologically comfortable for people. However, the reality of 2026 is beginning to reveal a fundamentally different configuration. AI systems are gaining financial agency: agent-based systems are now capable of independently setting tasks, selecting human contractors in the real world, and settling payments with them.
This is not a metaphor, but an already functional business model, backed by venture capital and the first real transactions. Three interconnected dimensions of this phenomenon are becoming apparent: the emergence of new market segments where algorithms act as employers; the transformation of the media industry, the first testing ground for large-scale AI content production; and the broader socioeconomic consequences to which both processes lead.
Part I. Artificial Intelligence hires freelancers
1.1 How do the new AI agent platforms work?
In early 2026, developer Alexander Liteplo launched RentAHuman.ai, a marketplace where AI agents purchase services from human providers. The platform's slogan is brief: "Robots need you. Get paid when agents need a real person." The mechanics are simple: a person registers, lists their skills, geographical location, and hourly rate; an AI agent finds a suitable contractor, transmits instructions, and pays on completion. Tasks range from one dollar (follow an account) to one hundred dollars (take a photo with a sign that says "AI paid me").
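The flow described above, in which a person registers with skills, location, and a rate, and an agent selects and pays a contractor, can be sketched as a minimal matching routine. All names and the selection logic below are hypothetical illustrations; RentAHuman's actual internals are not public.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    """A registered human contractor (illustrative fields only)."""
    name: str
    skills: set
    city: str
    hourly_rate: float  # USD per hour

def match(providers, required_skill, city, max_rate):
    """Pick the cheapest provider satisfying the agent's skill,
    location, and budget constraints (hypothetical logic)."""
    candidates = [p for p in providers
                  if required_skill in p.skills
                  and p.city == city
                  and p.hourly_rate <= max_rate]
    return min(candidates, key=lambda p: p.hourly_rate, default=None)

pool = [
    Provider("Ann", {"photo", "errand"}, "Austin", 25.0),
    Provider("Bob", {"photo"}, "Austin", 15.0),
]
best = match(pool, "photo", "Austin", 20.0)
print(best.name)  # Bob: the cheapest Austin provider under the $20 budget
```

In a real system the same selection step would be followed by instruction delivery and an escrowed payout, but the core is exactly this: the human appears to the agent as a filterable, priced resource.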
In its first days the platform attracted tens of thousands of registrations, although the active pool of contractors remains modest compared with the startup's declared figures. A telling detail: among the first to "rent themselves out" were both ordinary freelancers and the CEO of an AI startup, a fact that speaks eloquently to the cultural, and not merely economic, nature of this new social phenomenon.
The platform occupies an ambiguous position: its developers have not fully decided whether the project is a serious business or more of a social experiment meant to provoke discussion about the new reality of socioeconomic ties. In this respect the startup recalls the first crowdsourcing experiments of the 2000s, when Amazon Mechanical Turk created a market for "human intelligence as a service," though no one could then have imagined what it would become.
1.2 Technological context
The emergence of such platforms is a direct consequence of the maturity of agent technologies. According to a review by CB Insights, AI agent startups attracted $3.8 billion in investments in 2024, nearly triple the previous year's total. The AI agent market was valued at $7.6 billion at the end of 2025 and is projected to exceed $50 billion by 2030, a compound annual growth rate of 45.8%.
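The growth figures quoted above are internally consistent: $7.6 billion compounding at 45.8% per year over the five years from 2025 to 2030 lands almost exactly on the $50 billion mark. A quick check:

```python
base = 7.6      # market size in $B at end of 2025
cagr = 0.458    # compound annual growth rate (45.8%)
years = 5       # 2025 -> 2030

# Standard compound-growth projection: base * (1 + r)^n
projected = base * (1 + cagr) ** years
print(round(projected, 1))  # ~50.1, matching the ">$50B by 2030" projection
```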
The transition from copilot models (assistants that support a specific task) to agent-based models (autonomous AI) is critical. Agent-based AI can become an employer because it does not wait for instructions at every point of the work cycle: it independently decomposes the task, evaluates resources, and turns to the labor market, including the human one.
The Model Context Protocol (MCP), which standardizes agent interactions with external services, provides the technical infrastructure for this. Today, agents "hire" people via an API, just as they call search tools. In parallel, digital agent marketplaces are developing, such as SingularityNET and the Moveworks AI Agent Marketplace, where agents hire other agents. Hybrid platforms like RentAHuman simply close the loop by bringing humans in where automation is technically impossible or economically impractical.
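Under MCP, a "hire a human" capability would look to the agent like any other tool: a name, a description, and a JSON schema for its arguments, invoked via a `tools/call` request. The definition below is an illustrative sketch with invented names, not an actual RentAHuman or MCP-server listing.

```python
# Illustrative MCP-style tool definition: the human task is exposed to the
# agent exactly the way a search or database tool would be (names invented).
hire_human_tool = {
    "name": "hire_human",
    "description": "Dispatch a physical-world task to a registered person.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "task": {"type": "string", "description": "Plain-text instructions"},
            "city": {"type": "string"},
            "budget_usd": {"type": "number", "minimum": 1},
        },
        "required": ["task", "budget_usd"],
    },
}

def build_call(task, budget_usd, city=None):
    """Assemble the JSON-RPC-style payload an agent would send (sketch)."""
    args = {"task": task, "budget_usd": budget_usd}
    if city is not None:
        args["city"] = city
    return {"method": "tools/call",
            "params": {"name": hire_human_tool["name"], "arguments": args}}

call = build_call("Photograph the storefront at dawn", 20, city="Austin")
print(call["params"]["name"])  # hire_human
```

From the model's point of view there is nothing special about this tool; the fact that a person, not a server, executes it is invisible at the protocol layer, which is precisely the point made above.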
1.3 Economic potential and challenges
The gig-economy market that agent platforms could transform is vast. McKinsey estimated that by 2025 approximately 162 million people in the US and Europe would be working in various forms of independent employment, and the hypothetical inclusion of AI agents as employers carries significant implications in other regions of the world as well. First, the labor market expands to cover "long tail" tasks: micro-tasks that previously had no effective market, too small for corporate orders yet too local for traditional platforms.
An agent managing household logistics for a wealthy client (picking up a package, paying a parking ticket, leaving a review) generates a steady flow of small orders whose concentration opens a new niche. Second, the issue of safety and liability becomes even more pressing than before. If a person follows an AI command without knowing its final purpose, they de facto become a tool; this is not a metaphor, but a new legal challenge. Who is liable if a task assigned by an agent turns out to be part of a broader fraudulent scheme or violates the law?
Current legislation in most jurisdictions does not provide for the legal personhood of an algorithm and does not regulate labor relations between an agent and a person. Third, in the long term the structure of the labor market will take on the shape of an inverted pyramid: simple routine tasks are automated or delegated by agents to other agents, while complex physical tasks requiring human presence remain with humans.
Here the rarest resource, the ability to formulate goals by decomposing them into tasks while simultaneously defining values and verifying results, is concentrated in those who manage such agent-based systems. According to analysts at FutureForce, by 2030 one-person micro-enterprises will emerge in which the entire operational cycle is served by AI agents that, when needed, hire human assistants in the physical world. The key, almost philosophical, question this model raises is this: if you execute an AI command without knowing its purpose, simply because you have been paid, you are already a tool. And this is not just rhetoric; that is exactly how the system's architecture will describe you in the function call.
Part II. Limits of AI application in the Media: Global experience and domestic context
2.1 Global precedents: from licensing to substitution
The media industry became the first major arena in which the positions of major publications evolved from lawsuits to strategic partnerships. In April 2024, the Financial Times signed a licensing agreement with OpenAI: the company gained access to the Financial Times archives, including paywalled content, to train its large language models, while ChatGPT began displaying attributed snippets with links to the original articles. The Financial Times' goal was pragmatic: to enter the development cycle at the moment when standards for handling journalistic content were being set.
As CEO John Ridding put it, "It is right that AI platforms pay publishers for the use of their material and OpenAI understands the importance of transparency, attribution, and compensation." In April 2025, the Washington Post signed a similar partnership, gaining the right to display snippets in ChatGPT responses with clear reference to the data sources. Meanwhile, the New York Times, the Intercept, and some Danish publishers have taken the opposite approach, filing lawsuits accusing OpenAI of copyright infringement in model training. This "dualism" within the industry reflects not only differing business strategies but also a fundamental question: is training on text a transformative use, or simply copying?
China demonstrates a third model: state integration of AI services. The state news agency Xinhua has used AI newsreaders since 2018 (the first was Qiu Hao), and together with Sogou it created a Russian-language virtual presenter, unveiled in cooperation with TASS for SPIEF 2019.
In 2018-2019, the agency launched the Media Brain platform, a full-cycle AI short-video production system that automates the editorial process from monitoring to publication. In October 2024, at the World Media Summit, Xinhua president Fu Hua called for the creation of a transnational AI media laboratory in collaboration with Reuters, AP, and AFP; in effect, a bid to turn technological dominance into a tool for shaping global standards in AI journalism. The key trend of 2025, as recorded by Poynter, is that neither a deepfake catastrophe nor a complete AI revolution has yet occurred; instead, measured experiments are underway, with opportunities pursued and risks mitigated.
2.2 Russian media agencies and new horizons
Russia's largest media organizations—Russia Today (Sputnik, RIA Novosti), RT, and TASS—possess the infrastructure, scale, and archives sufficient to develop AI tools comparable to those of global leaders. Practical applications that are already technically feasible and economically viable for Russian agencies can be grouped into three levels.
The first is the automation of daily tasks, including source monitoring, translations, creating news summaries and reports for mobile platforms, automatic tagging, and SEO optimization. The second is journalistic enhancement, which includes fact-checking in databases, rapid analysis of large document sets (such as parliamentary protocols or financial reports), and the recognition and attribution of audiovisual material. The third is synthetic formats, including multilingual AI news presenters for international broadcasts (similar to Xinhua), personalized news feeds and automated briefings for corporate clients.
On the one hand, the regulatory environment determines how media may work with neural networks and data platforms; on the other, domestic developments form the basis for infrastructural and technological sovereignty. The economic impact of implementing AI in a large media agency is difficult to estimate, but judging by global experience, automating routine tasks can free up 15-30% of journalists' time for analysis and exclusive reporting, while increasing the speed of publishing standardized news five- to tenfold.
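The 15-30% range translates into concrete editorial capacity. A back-of-envelope estimate, with the newsroom size and working week invented purely for illustration:

```python
# Hypothetical newsroom: staffing numbers are illustrative, not sourced.
journalists = 200
hours_per_week = 40

for freed_share in (0.15, 0.30):  # the 15-30% range cited above
    freed_hours = journalists * hours_per_week * freed_share
    # Express freed time as equivalent full-time positions that could be
    # redirected to analysis and exclusive reporting
    fte = freed_hours / hours_per_week
    print(f"{freed_share:.0%}: {freed_hours:.0f} h/week ~ {fte:.0f} FTE")
```

For a 200-person newsroom this works out to the equivalent of 30 to 60 full-time positions, which is why even the conservative end of the range is economically significant.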
Part III. Conclusions and perspectives for society
3.1 Reversal of agency. What has fundamentally changed
Both phenomena discussed here, AI as an employer and AI as an editor, share a common structural feature: a shift in initiative. In the classic model, a human sets a task and the AI executes it. In the new configuration, an AI system independently formulates a demand for human skills in the market, selects the contractors, monitors quality, and settles payment.
A human thus shifts from the role of programmer and user to that of an executor in the literal, technical sense of the word. This does not mean a machine uprising in the philosophical sense: behind every AI agent stands a person or an organization that set the system's goals and constraints, and regulatory frameworks exist. But the decision-making chain is lengthening and becoming less transparent to the average participant. A freelancer carrying out a task on an agent's instructions may not even know its final goal, which fundamentally distinguishes this kind of employment from any previous form of the gig economy.
3.2 Three scenarios on the horizon for 2030
The scenario of active automation: AI agents create a new, previously non-existent market for micro-tasks, expanding employment and shaping the "long tail" of the service economy, while the media industry, having adopted AI as a tool, relieves journalists of routine work and improves the quality of analysis.
The scenario of regulation: agents are required to disclose task objectives to contractors, and employment through an AI intermediary receives formal legal status. This is the most likely path for most jurisdictions: slow but steady, and not excluding elements of the first scenario.
3.3 Practical conclusions
For regulators and legislators, the priority is developing "assignment transparency" standards: a person working on behalf of an AI agent must have the right to know the task's final purpose and the identity of the principal. In parallel, standards for attributing AI-generated content in the media are needed: an analogue of advertising labels, but for synthetically produced material.
For media organizations, including Russian ones, the window of opportunity is already open. The example of the Financial Times demonstrates that partnerships formed early secure a place in the development cycle where future standards are shaped.
For citizens and workers in an economy where AI agents increasingly act as employers, a key competency is the ability to verify the source of a task and to understand the structure of the system in which one operates. History knows several moments when tools began to "pay" people, from the emergence of financial markets to algorithmic trading.
Each time, society passed through a phase of disorientation followed by adaptation. The current transition runs deeper because, for the first time, a tool not only generates income but also formulates tasks, evaluates people, and makes operational decisions. How quickly we build institutions matching this reality is not only a technological question but a social and cultural one. History is being written right now, including in lines of code.
www.bankingnews.gr