Since the launch of ChatGPT by OpenAI in 2022, the development of AI has accelerated dramatically, ushering in a new era of mainstream adoption and innovation that is transforming how we work, live, and think.
That framing opened a one-hour ARiMI x RIMAS webinar featuring Marc Ronez, Andeed Ma, and Tianyu Xu. The discussion focused on moving beyond hype and fear to examine the risks, transitions, and opportunities we must understand to shape a safer, more effective, and more innovative future. It remained grounded in current practice and steered clear of speculative claims.
AI Within a Wider Risk Map
Marc first positioned AI within a portfolio of emerging risk issues that interact with one another, generating a range of possible futures in which AI often acts as a key amplifier. A live poll at the start of the webinar asked participants to name their top AI risk concerns, and three issues came out on top:
- New cybersecurity vulnerabilities
- Unethical or criminal misuse of AI by bad actors
- Unregulated and uncontrollable growth of AI
The panel treated these as connected issues that should influence day-to-day choices rather than as isolated topics.
Focusing on AI deployment, Andeed pointed to a simple driver behind many AI failures. Teams are deploying AI tools too quickly while they are still learning how to use them. When literacy trails adoption, blind spots appear. People accept outputs they do not fully understand and move data in ways they did not intend. As Andeed noted, “It is not just about speed. It is about understanding what we are seeing and how we manage it.” He stressed the need to develop governance frameworks to assess and manage AI behaviour before applying the tools.
Tianyu added that the barrier to AI misuse is lower than many assume. Prompt-based probing and social engineering can bypass protections in automated systems, which means ordinary users now need to develop habits that were once the domain of security specialists.
The discussion shifted from enterprise risk to personal exposure, particularly scams involving voice cloning and deepfake video impersonation. These technologies can deceive victims into believing they are interacting with family, friends, or colleagues. Marc offered practical guidance to guard against such scams. First, as a principle, people should refrain from acting under pressure without proper review. Second, before sharing sensitive information, identity should be verified using methods that cannot be circumvented by AI. Marc suggested that families or teams establish a private code to authenticate a caller and distinguish human interaction from AI-generated impersonations. The use of simple control protocols can be highly effective in deterring scam attempts.
AI With Human Accountability in the Control Loop
The speakers agreed on a key principle: humans must remain accountable for oversight and outcomes when using AI. The challenge is how to achieve this effectively and at speed. Andeed described a containment approach in which AI helps govern AI, ensuring that controls can keep up with automation. He framed this as a design challenge, with solutions that must be continuously adjusted.
Tianyu focused on near-term practical concerns and offered a series of recommendations:
- verify accuracy before using AI-generated outputs,
- avoid leaking credentials or sensitive information through uploads and prompts, and
- build checks into ordinary workflows rather than relying on ad hoc vigilance.
The message was clear: stay in charge.
Marc reinforced the operational tone by suggesting that AI systems be treated as collaborators that augment human capabilities, while final decision rights and review steps remain clearly with people.
Roles Evolve as Task Mixes Change
When the conversation turned to jobs, the panel recognised that the current speed of AI adoption and innovation, particularly when combined with robotics, could eventually affect most human jobs and activities. However, they avoided sweeping predictions about the scale and pace of job displacement.
Tianyu reframed the issue as one of job evolution. In his view, jobs are changing, not disappearing. The focus should be on how the mix of tasks within a role is shifting.
Many routine elements of knowledge work can now be handled by AI agents, allowing people to concentrate on higher-value tasks such as:
- framing problems
- setting criteria
- synthesising results
- making decisions
The job remains, but the shape and content of the role evolve. What matters is whether organisations are investing in people to adapt to that change.
Andeed agreed and cautioned leaders against focusing too narrowly on AI automation as a cost-cutting measure, as this could result in the loss of tacit knowledge and organisational cultural values.
He also pointed to a perception risk that may slow down AI adoption. When AI-supported work is seen as lazy or incomplete, people may hide effective practices instead of improving them.
Marc supported this view and added that taking the easy road of over-reliance on AI could ultimately lower the quality of outcomes. He stressed that AI tools must be used in ways that augment human capabilities, not diminish them. When used well, AI is a powerful enabler that allows everyone to be augmented, whether to become a better artist, creator, chef, or professional in any field.
Skills That Matter Now for Humans
As the speakers reflected on the long arc of AI transformation, they shared a common concern: over-reliance on AI may weaken the very cognitive strengths that allow humans to learn and grow.
Two cognitive skills were emphasised. In an environment saturated with AI-generated content, Tianyu highlighted the importance of
- critical thinking to question outputs and claims, and
- creative thinking to avoid settling too quickly on the first plausible result.
Both skills help keep human judgment active.
Andeed predicted that reviewer and auditor habits will become essential for more people, not just specialists. He drew a parallel with how everyday cybersecurity awareness developed over time. People learned not to plug in unknown devices and to protect screens in public. Similarly, AI adoption will require new behaviours, supported by training and regular practice.
Marc suggested that generalist capabilities – the “jack of all trades” approach – will become a valuable human advantage in an AI-integrated workplace.
While AI may outperform human specialists in many technical areas, humans are still better equipped to manage complex, loosely related tasks. As specialised tasks become more AI-supported, value will shift to people who can connect ideas across disciplines, pose insightful questions, and integrate knowledge from multiple domains. Marc concluded that although AI offers the potential to augment everyone, not everyone will benefit unless they take the initiative to learn how to use it.
The panel agreed that those who do not use AI are likely to lose opportunities to those who do.
Time Horizons Without Overclaiming
The panel recommended viewing AI’s development and impact through short-, medium-, and long-term perspectives.
- In the short term, the concrete risks include misuse by bad actors, data leakage, hallucinated content, biases, and unverified claims.
- The medium term raises questions about operating models, the value of human work, governance structures, and organisational culture.
- The long-term future remains uncertain. Two scenarios were discussed without drawing firm conclusions: one of sustained human-AI collaboration under well-designed safeguards, and another in which AI systems pursue goals that may not align with human intent, with serious consequences. The emphasis was on proactive monitoring, ongoing research, and adaptable human safeguards.
Andeed urged the audience to consider demographic trends alongside technological ones. Many economies face population decline and a shrinking workforce. That reality strengthens the case for using AI to preserve productivity and service levels. In such contexts, AI becomes both a cost lever and a capacity lever.
What Remains Distinctly Human
When asked what defines human value in an AI-enabled environment, the speakers stayed close to what can be practised:
- Keep learning
- Keep combining knowledge in new ways
- Keep connecting people and ideas
- Keep adapting and innovating with intent
The professional who can move between domains and make useful connections will remain essential, especially because AI tools allow deeper and faster progress within each individual field. The capacity to orchestrate, integrate, and decide is what continues to set humans apart.
Closing Perspective
The conversation did not minimise short-term risks, nor did it claim certainty about the long-term future. It outlined a disciplined adoption pathway:
- Build AI literacy rapidly.
- Keep people firmly in the loop, with clear checkpoints and decision steps anchored in human responsibility.
- Redesign work boundaries so that, where appropriate, routine physical and cognitive tasks shift to AI, while final review and judgment remain with accountable people.
- Foster a culture that rewards thoughtful learning and review.
These are decisions leaders can make now.
This article summarises the one-hour webinar “Human + or vs AI… What is the Future?” held on 5 August 2025 at 4:00 PM.
If you could not attend, follow ARiMI Events or LinkedIn for the next live session. Bring your team and your questions, and invest one hour to turn practical ideas into actions you can apply the very next day.