
Gege Gatt on AI and Society: Building a Future We Can Trust

Hailey Borg

We are entering an era where AI is no longer just a tool but a force shaping human identity and behaviour. The relationship between humans and machines must therefore be fundamentally reimagined: not as one of competition, but of coexistence. As the technology becomes deeply woven into our social fabric, we must establish a new social contract: one that ensures technology amplifies human potential, safeguards democratic values and distributes prosperity fairly. If we do not proactively shape AI’s trajectory, we risk allowing it to reshape us in ways we neither intended nor desire.

We stand at a pivotal juncture, possibly the most significant of this century, where our present choices will define the future relationship between humans and Artificial Intelligence (AI). By 2030, AI will be deeply embedded in how we work, govern, learn, and live.

This demands a rethinking of the social contract: the implicit agreement on rights and duties between citizens, governments, and the private sector. How do we ensure AI-driven technologies advance democracy and human welfare, rather than undermine them? How do we harness economic gains for all, preserve cultural richness, and uphold ethical principles in the face of rapid technological change?

In this post I propose a cohesive vision that brings socio-economic transformation, digital innovation and governance imperatives together into a new blueprint for society.

A NEW SOCIAL CONTRACT FOR AN AI WORLD

The social contract has always been about balance: individuals cede some freedoms and effort in exchange for security, opportunity, and fairness provided by society and its leaders. An AI-saturated world calls for updating this contract. AI’s growing autonomy and decision-making power raise questions of accountability and benefit-sharing that our current norms don’t yet answer.

Responsibilities of governments will include safeguarding citizens’ digital rights and well-being in the face of AI disruption. This means governments must actively shape AI’s trajectory through policy, not passively let technology dictate social outcomes. As we develop more powerful AI, we need better stewardship of its trajectory and value-based involvement in its shaping. Democratic governments should establish transparency and oversight requirements so that critical AI decisions affecting citizens (from criminal justice to healthcare prioritisation) are explainable and aligned with the public interest. The EU AI Act is, for the moment, the best normative structure to achieve this. In turn, citizens will need to engage with these technologies responsibly: practising digital literacy, resisting the spread of misinformation, and participating in debates on how AI is used in their communities.

Crucially, the new social contract must address the distribution of AI’s benefits. AI’s capabilities could drastically increase productivity and create enormous wealth, but without intervention that wealth may accrue to only a small group of actors (a handful of technology companies or countries), widening inequality. This is already happening through technology consolidation in North America.

The old struggle between labour and capital is upended when AI requires no wage and is available at near zero marginal cost. Ideas like universal basic income or data dividends (paying people for the data that fuels AI) are on the table. Additionally, social safety nets and worker retraining programs must be strengthened to support those displaced by automation. In exchange, we must embrace continuous learning and flexibility as part of what we consider to be civic duty. Lifetime employment in one role is a thing of the past.

A new social contract should enshrine the idea that human values and agency remain paramount in the AI age. AI should amplify our humanity, not diminish it. This means humans must retain ultimate control and responsibility for AI systems’ actions.

Calling AI “neutral” or value-free is dangerous. AI is a human endeavour guided by values, and we must not allow it to become an excuse to escape human responsibility. In practice, this means creating legal frameworks in which AI developers and operators are accountable for outcomes, and mechanisms to audit and correct AI decisions that violate ethical or legal norms. The EU’s AI Liability Directive is laudable in this regard, as it introduces a new regime that ensures legal certainty in handling claims for damage caused by AI.

It also means agreeing that certain uses of AI are off-limits: for example, autonomous weapons that make life-and-death decisions without human oversight, or mass surveillance systems that eradicate privacy. By clearly defining these boundaries now (much as society defined rules for nuclear technology or biotechnology in earlier eras), we assert that technology must abide by collectively determined values.

GOVERNANCE, DEMOCRACY AND ETHICS

As AI becomes more powerful, the intersection of governance, democracy, and ethics becomes a critical arena. Democratic institutions worldwide face a dual challenge: leveraging AI to improve public services and decision-making on one hand, while defending against AI-driven threats to the democratic process on the other.

One urgent concern is the impact of AI on information ecosystems and electoral politics. Today’s algorithms determine which news we see, and generative AI can produce hyper-realistic fake content. Democracy can be eroded by a manipulative mechanism that rewards falsehoods and deception in our online media environment. Personalised feeds, driven by engagement metrics, tend to amplify sensational or misleading information: truth is often the casualty. This leaves citizens misinformed and distrustful, creating a two-fold threat to democracy that has been documented by the Edelman Trust Barometer. The report finds that, across the globe, 69% of respondents feel that government leaders purposely mislead people (up from 58% in 2021). By 2030, without corrective action, AI-powered disinformation could undermine the very notion of a shared reality that democracy requires.

A policy response is needed to align AI-driven media with democratic values. This might include requiring social platforms to deploy AI for fact-checking and content moderation in a transparent, accountable way. Supporting independent fact-checking services and public-interest algorithms can help correct falsehoods, but these systems themselves must be auditable and fair. The C2PA content-provenance standard is a step in this direction. The goal is transparent AI in media: tools that make underlying values explicit and encourage companies to take responsibility for algorithmic decisions. Europe’s efforts in this direction (such as the EU’s Digital Services Act) are early examples of governance trying to rein in algorithmic harms while preserving free expression.

Another governance issue is accountability for AI decisions in public administration. When governments use AI (for example, to allocate welfare benefits or assess visa applications), the stakes are high. An algorithmic error or bias can unjustly impact lives. Thus, democratic governance requires that AI systems used by the state are thoroughly tested for bias and that their decision logic can be explained in human terms. A ‘human in the loop’ framework for human accountability and agency over AI outcomes is a good first step. This could mean every government AI system has a clear “owner” (a team or agency) responsible for its oversight, and affected citizens have the right to appeal AI-driven decisions to that human-led authority.

With this oversight in place, governments must commit to bringing the power of AI into their operations to create more responsive, efficient public services by 2030. Today, the public sector significantly lags the private sector in AI adoption. To catch up, governments should develop a bold digital transformation strategy: appoint a Chief AI Officer at Cabinet level and build public-sector AI systems (for example, to streamline administrative processes, predict infrastructure needs, or personalise citizen services). Research suggests that with currently available AI technology, governments could eliminate service backlogs, shorten waiting times, and improve long-term planning within a single election term if they deploy AI widely. Such a strategy also implies training civil servants in AI ethics and oversight, so that the public sector remains a trustworthy steward of AI tools.

Importantly, ethical guardrails must be built into AI design and deployment from the start. That is the role of the AI ethics guidelines being developed around the world, from the OECD’s AI Principles to UNESCO’s recommendations. By 2030, we can expect ethical AI assessment to become as routine as financial auditing, especially for high-impact AI (used in policing, hiring, healthcare, etc.). Democracies may even codify an “AI Bill of Rights”: a set of rights for citizens in algorithmic interactions (such as the right to explanation, freedom from biased decisions, and control over personal data). Here again, Europe is leading the way through the EU AI Act.

We should also guard against techno-authoritarianism. Not all governments will use AI in ways that bolster democracy; some may use it to reinforce autocracy. The social contract in an AI era thus demands vigilance that fundamental freedoms are not traded away for promised efficiency or security. Citizens should insist that AI is used to assist governance, not replace it.

Algorithmic tools can help analyse policy impacts or optimise resource allocation, but final decisions must remain with human legislators and judges who can be held accountable by the people. In other words, we cannot “outsource” moral and political judgment to machines. Maintaining this principle is key to preserving human agency in governance.

With thoughtful governance, AI can be a tool to strengthen democracy – making governments more responsive and effective – rather than a weapon that weakens it.

AI AND THE ECONOMY

Perhaps nowhere is the impact of AI more palpable than in the economy. As we approach 2030, AI and automation are reconfiguring labour markets, industries, and wealth distribution at an accelerating pace. This economic upheaval carries both opportunities for greater prosperity and risks of greater inequality, depending on how we respond.

Fears that “robots will take all the jobs” coexist with hopes that AI will free humans from drudgery. The truth lies somewhere in between. In the near term, AI is automating specific tasks more than entire jobs; however, AGI will upend that. By the end of this decade many roles, especially routine and low-skill jobs, will be displaced. By one estimate, activities accounting for up to 30% of hours worked in the U.S. could be fully automated by 2030, a trend accelerated by recent advances in generative AI. The OECD cites similar findings. These numbers are significant, but they also indicate most jobs will not vanish outright. Instead, their content will shift, and new jobs will emerge.

AI’s effect on employment can be understood in three ways:

Displacement effect: AI and robots directly substitute for some jobs or tasks previously done by people (e.g. automated customer-service virtual agents replacing call centre operators).

Skill-complementarity effect: AI creates demand for new jobs to develop, manage, and maintain AI systems, or it enhances existing jobs (e.g. data scientists, AI ethics officers, or a doctor using AI diagnostics). AI extends human capabilities rather than replaces them.

Productivity (and demand) effect: By improving productivity and lowering costs, AI can increase disposable income and create indirect opportunities. For example, cheaper production might reduce prices, so consumers spend savings on other goods/services, spurring job growth in those sectors.

Importantly, these effects unfold over time, not all at once. Early on, displacement may dominate; later, new roles and industries materialise. The automation of tasks (which is unfolding now) occurs before the automation of jobs. This gives societies a window to adapt. History shows technology usually creates as many jobs as it destroys (though not necessarily for the same people, in the same places, or with the same skills). Indeed, multiple studies project that AI will ultimately create roughly as many jobs as it displaces, albeit different ones. The challenge is avoiding painful frictional unemployment and ensuring people can move into the new, often higher-skill roles.

Industrial policy and economic strategy must therefore focus on smoothing this transition. Governments should invest in sectors where AI can drive growth (such as healthcare, green energy, advanced manufacturing) while also supporting industries likely to be hard-hit by automation. National AI strategies, already adopted by dozens of countries, often include funding for AI research, incentives for AI startups, and infrastructure for digital innovation. The goal is to ensure one’s country is among those leveraging AI for economic expansion, rather than falling behind. A new world order is already emerging, dividing countries that can leverage AI to accelerate societal and economic growth from those that cannot. No nation wants to be left on the wrong side of that divide, with stagnant industries and talent exodus.

Nations need to fundamentally embrace AI and automation to increase the supply of essential goods and services, driving down costs and improving living standards. For instance, advancing modular construction (with AI-driven design and fabrication) can deliver homes 50% faster and at lower cost, addressing housing shortages in Europe. Similarly, supporting AI-enabled drug discovery and healthcare can improve public health. Governments must spur this development by funding R&D, streamlining regulations, and incentivising deployment in sectors that improve quality of life (housing, energy, transportation, healthcare). The goal is a high-productivity, “abundant” economy where prosperity is broadly shared, rather than a winner-take-all scenario.

There is also an opportunity to re-humanise work in this AI era. If rote tasks are delegated to machines, humans can refocus on what we do best: creative, interpersonal, and strategic work. However, seizing this opportunity requires that our education and training systems equip people with those very human-centric skills.

EDUCATION FOR AN AI-TRANSFORMED SOCIETY

Education is both a shield and a sword in the AI age: a shield against technological unemployment and disenfranchisement, and a sword that enables society to harness AI’s opportunities. Preparing society for AI’s disruptions and possibilities starts in the classroom and continues throughout one’s life.

Redesigning education for 2030 means teaching students not just to use AI tools, but to thrive alongside them. This entails a few shifts:

Critical thinking and adaptive learning: In a world where AI can answer factual questions instantly, the value of human education shifts to learning ‘how to learn’ and how to question and synthesise (a skill I first observed in the polymath philosopher Peter Serracino Inglott). Employability is now less about what you know, and more about your ability to learn over time. Schools and universities should focus on developing cognitive flexibility, problem-solving, and critical analysis. Students should learn to critically evaluate AI outputs (for example, spotting when a ChatGPT answer might be incorrect or biased) rather than taking them at face value.

A personalised AI tutor for every student: We can leverage AI to provide individualised tutoring and support, supplementing human teachers and democratising high-quality education. Policymakers can commit to initiatives that integrate AI tutoring systems into education, ensuring every student has access to one-on-one help. We now have the technology (as demonstrated by Khan Academy, Axon Park and others) to overturn the factory-model classroom. AI can deliver in-depth tutoring in almost all subjects with proven gains in student performance. By 2030, a realistic goal is that each student gets a personal AI assistant (aligned to curricula and vetted for accuracy) alongside teacher-led instruction. This is especially relevant in under-resourced schools. Importantly, these AI tutors would work in tandem with educators, handling routine practice and feedback, while teachers focus on higher-level mentorship, critical thinking discussions, and socio-emotional learning.

Supporting teachers: In equal measure, we must help teachers and professors use AI to enhance teaching rather than fear replacement. One policy idea is to embed AI as a teaching assistant in classrooms: for grading automation, generating personalised lesson plans, or identifying students who need extra help, so that educators can spend more time on one-on-one mentoring and creative instruction. By 2030, the role of the teacher must evolve into more of a facilitator and coach, guiding students in how to think critically and use AI effectively.

To enable this, education systems must involve teachers in the design and rollout of AI systems in schools (addressing their concerns and insights). Policymakers should also guard against the over-automation of teaching: maintain reasonable class sizes and human staffing even as AI tools arrive, so that human connection in education isn’t lost. Teachers could be incentivised (with grants or career-advancement credits) to innovate in their pedagogy through AI. Ultimately, a teacher+AI team can outperform either alone, combining the empathy and inspiration of humans with the data-driven personalisation of AI. Policies that nurture this synergy (rather than viewing AI as replacing teachers) will lead to better educational outcomes.

STEM and AI literacy for all: While not everyone needs to be a programmer, basic understanding of how AI works is becoming essential civic knowledge. Concepts like algorithms, data privacy, and machine bias should be covered in curricula, so that future citizens can engage in informed debate about AI policy. Moreover, encouraging more youth to pursue STEM careers will expand the workforce that can develop and manage AI: an economic imperative. However, this push must be inclusive: initiatives to break down barriers for underrepresented groups in tech (women, minorities, disadvantaged communities) are crucial so that the creators of AI reflect the diversity of its users.

Ethics and humanities integrated with tech education: To ensure human-centric AI design, tomorrow’s engineers and data scientists need a strong grounding in ethics, philosophy, and social science. Educational programs are beginning to combine AI courses with ethics modules, case studies on societal impacts, and interdisciplinary projects. By 2030, the norm should be that anyone building AI has contemplated its potential effects on privacy, fairness, and human rights. This nurtures a generation of technologists who prioritise responsible innovation over moving fast and breaking things.

The social contract around education must shift. Futurists predict employers and educational institutions will increasingly co-design programs to meet evolving skill needs. Policymakers can facilitate this by accrediting non-traditional learning (online courses, bootcamps, workplace training) and creating qualification frameworks that recognise skills acquired outside formal degrees.

Beyond formal education, lifelong learning and reskilling will be the norm. Governments and businesses must provide avenues for mid-career workers to gain new skills as old ones become obsolete. This could involve public-private partnerships offering free or subsidised training in digital skills, bootcamps, or on-the-job apprenticeships for using AI tools. Some countries are exploring concepts like educational leave: allowing workers to take paid time off to upskill or retrain. By 2030, taking a year at mid-career to learn new skills might be as common as a gap year is among students today. Such a culture of continuous learning is key to ensuring workers can transition between the jobs AI eliminates and the ones it creates.

Digital literacy for society at large is equally important. AI is not just in workplaces; it’s in our homes, phones, and piazzas. Citizens need to know how AI systems might influence them; for instance, understanding that a social media feed is curated by algorithms (and how to adjust settings for a healthier information diet). Public awareness campaigns and community courses can help people of all ages become savvy users of AI, rather than passive consumers. In the 20th century, societies pushed for universal literacy; in the 21st, we may need universal AI literacy.
