The following is a guest post and opinion of Zac Cheah, Co-Founder of Pundi AI.
Fully autonomous AI agents are impossible. And rather than eliminating jobs, autonomous AI agents create new work opportunities, with humans assisting agents' functions throughout their lifecycle.
Every autonomous AI agent in production requires human action because it cannot operate independently, and that dependence creates job openings. Although AI agents operating at scale exceed any single person's cognitive capacity, each agent is backed by multiple human-led teams across the development pipeline.
These agents need human developers to build the underlying infrastructure, write the algorithms, prepare human-labeled training datasets, and oversee auditing procedures.
Since fragmented datasets cause operational problems for autonomous agents, project teams must clean data before training. And because gaps in the data can produce wrong outputs, developers must safeguard an AI agent's integrity and market positioning through rigorous evaluation. Every AI company therefore needs human data cleaners, labelers, and evaluators to run its models.
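As a minimal sketch of the kind of pre-training cleanup this implies, consider a pass that drops duplicate records and records with missing labels. The field names and rules here are hypothetical, not any particular company's pipeline:

```python
# Sketch of a pre-training cleaning pass (hypothetical schema):
# drop duplicate records and records with missing or empty fields,
# since fragmented or gappy data degrades agent behavior.

def clean_dataset(records):
    seen = set()
    cleaned = []
    for record in records:
        text = (record.get("text") or "").strip()
        label = record.get("label")
        if not text or label is None:   # data gap: skip incomplete rows
            continue
        key = (text, label)
        if key in seen:                 # fragmentation: skip exact duplicates
            continue
        seen.add(key)
        cleaned.append({"text": text, "label": label})
    return cleaned

raw = [
    {"text": "refund request", "label": "billing"},
    {"text": "refund request", "label": "billing"},   # duplicate
    {"text": "  ", "label": "other"},                 # empty text
    {"text": "reset password", "label": None},        # missing label
    {"text": "reset password", "label": "account"},
]
print(clean_dataset(raw))  # two usable records survive
```

In practice these rules are exactly where human judgment enters: someone has to decide what counts as a duplicate, a gap, or a usable label.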
Further, human-supervised audits provide the checks needed to prevent harm from autonomous AI agents going rogue after deployment. These defense mechanisms consist of tiered teams: company management, policy workers, auditors, and other skilled technicians. It takes a village to build and maintain an AI agent across its lifecycle. Fully autonomous AI agents thus generate multiple job opportunities, since human expertise is required to create, deploy, and evaluate them.
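One simple form such a check can take is an approval gate: an agent's lower-risk actions execute automatically, while higher-risk ones are queued for a human auditor. The risk tiers and action names below are hypothetical, purely to illustrate the pattern:

```python
# Sketch of a human-in-the-loop approval gate (hypothetical tiers/actions):
# low-risk actions run automatically; higher-risk ones wait for a human.

RISK_TIERS = {
    "send_report": "low",
    "refund_customer": "high",
    "delete_account": "high",
}

def route_action(action, pending_queue):
    tier = RISK_TIERS.get(action, "high")   # unknown actions default to high risk
    if tier == "low":
        return "executed"
    pending_queue.append(action)            # held until a human auditor approves
    return "pending_review"

queue = []
print(route_action("send_report", queue))      # runs on its own
print(route_action("refund_customer", queue))  # waits for review
print(queue)                                   # the human auditor's worklist
```

Defaulting unknown actions to high risk is the conservative choice: anything the audit team has not explicitly classified stays behind the gate.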
Humans’ experiences help them develop nuanced societal understandings, which in turn help them make logical inferences and rational decisions. However, autonomous AI agents cannot ‘experience’ their surroundings and will always fail to make sound judgments without human assistance.
So humans must meticulously prepare datasets, assess model accuracy, and interpret outputs to ensure functional consistency and reliability. Human evaluation is critical to identifying prejudices, mitigating bias, and ensuring that AI agents align with human values and ethical standards.
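A basic version of that evaluation work can be sketched as a per-group accuracy check, where a human reviewer flags groups the model serves noticeably worse. The groups, data, and threshold here are hypothetical:

```python
# Sketch of a bias check for human evaluators (hypothetical data/threshold):
# compute accuracy per group and flag groups that fall below a floor.

def accuracy_by_group(examples):
    totals, correct = {}, {}
    for group, predicted, actual in examples:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (predicted == actual)
    return {g: correct[g] / totals[g] for g in totals}

def flag_groups(scores, floor=0.8):
    # groups whose accuracy falls below the floor need human review
    return sorted(g for g, acc in scores.items() if acc < floor)

examples = [
    ("group_a", "approve", "approve"),
    ("group_a", "deny", "deny"),
    ("group_b", "deny", "approve"),   # errors concentrated in group_b
    ("group_b", "deny", "deny"),
]
scores = accuracy_by_group(examples)
print(scores)               # group_a fares better than group_b
print(flag_groups(scores))  # group_b falls below the 0.8 floor
```

The numbers are mechanical; deciding which groups matter, where the floor sits, and what to do about a flagged group is the human evaluator's job.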
A collaborative approach between human and machine intelligence is necessary to prevent ambiguous outputs, grasp nuance, and solve complicated problems. With humans' contextual knowledge, common-sense reasoning, and coherent deduction, AI agents will function better in real-life situations.
Besides computational power, AI models need access to high-quality data for training, and domain specialists to fine-tune that data for efficient model performance. But megacorporations have monopolized control over the human-generated data used to build AI/ML models.
Pundi AI offers a decentralized data solution, providing equitable opportunities for everyone so that large companies don’t exploit data producers. Thus, humans can maintain control over their data and directly benefit from using it for AI model training, creating new AI-related job options.
Human intuition and creativity are key to developing new AI agents that can function autonomously in society without causing harm. Besides enhancing agents' general intelligence, human supervision ensures they keep performing well in independent settings.
Thus, a decentralized approach to building and deploying AI agents democratizes the AI industry by redistributing data and model training among people from diverse backgrounds, reducing structural bias, and creating new jobs.