Every major technological shift has raised a familiar question: what does this mean for people?
With Artificial Intelligence, that question feels sharper, because for the first time we are building systems that can reason, decide, and create alongside us. The concern is often framed as replacement, but in practice AI is changing something more subtle: the shape of human work, not the need for humans themselves.
We are moving into an era of shared intelligence, where human judgment, context, and values combine with machine scale, memory, and precision. In this configuration, neither side stands alone. People define goals, constraints, and ethics. Systems carry out the heavy analysis and execution.
In this section, we will examine how that partnership works in practice, what it means for jobs and skills, and how organisations can design collaboration models that keep human responsibility and agency at the centre while making full use of AI’s capabilities.
1. The Myth of Replacement
When people first hear about AI, they often imagine a single outcome: robots taking over jobs and humans being pushed aside. The headlines can reinforce that fear. Amazon, for example, announced around 14,000 corporate job cuts in 2025, after already cutting about 27,000 roles between late 2022 and early 2023, explicitly linking part of the restructuring to investments in generative AI and automation. PwC has cut roughly 1,500 roles in the United States in 2025, and several thousand more globally, as it restructures, slows hiring, and pivots investment into technology areas such as AI, automation, and data analytics.
It is important to understand what is happening in these cases. These layoffs are not simply AI replacing “people” in a general sense. They result from a mix of factors: prior overexpansion during boom years, lower than usual staff turnover, pressure from clients to lower prices when work is partially automated, and a strategic decision to shift budgets from labour intensive services into AI infrastructure and new products. Companies are removing layers of routine coordination and process work, while at the same time hiring aggressively in new areas such as AI engineering, data science, cloud operations and AI governance.
If we step back from the current news cycle, history tells a more nuanced story. From the printing press to the industrial revolution to the arrival of personal computers, each major automation wave has displaced certain tasks and even whole occupations, but it has also created new industries, new roles and, over time, more total employment, not less. Work by economists such as David Autor shows that automation both substitutes for some kinds of labour and complements others, raises output and creates demand for new services, which in turn generates new jobs.
Modern research on AI follows a similar pattern. An OECD review of AI and labour markets finds no clear evidence of an overall decline in employment in occupations exposed to AI so far. Instead, AI is reshaping the content of jobs and increasing wage growth in some highly exposed roles, particularly those involving complex cognitive tasks. The IMF estimates that about 60 percent of jobs in advanced economies will be affected by AI, but stresses that roughly half of these are likely to benefit from AI integration through higher productivity rather than straightforward elimination.
What actually changes is not whether humans work, but what kind of work they do. AI systems are well suited to activities that are repetitive, highly structured or heavily data based. Examples include reconciling transactions, screening documents, generating first drafts of standard reports, scheduling, triaging routine customer queries or extracting fields from forms. When these tasks are automated, they do not vanish into a void. They move out of the daily workload of people, which frees capacity for activities that remain distinctly human:
- Creative work, such as designing new products, campaigns, services or strategies.
- Relational work, such as building trust with clients, mentoring junior staff, negotiating complex deals and leading teams.
- Judgment and ethics, such as deciding how to apply rules fairly, which risks are acceptable and what outcomes align with an organisation’s values.
Evidence from PwC’s own 2025 Global AI Jobs Barometer, based on nearly one billion job postings, suggests that roles which use AI intensively are actually growing faster and often pay more than roles that do not, particularly in fields like software, finance, healthcare and professional services. This aligns with the idea that AI tends to reconfigure jobs into higher value combinations of human and machine capability rather than simply erasing them.
So it is accurate to say that AI is contributing to layoffs at firms such as Amazon and PwC. It is not accurate to conclude that AI will simply “take all the jobs.” In practice, AI is taking over the chores inside jobs: the repetitive, mechanical components that can be translated into rules and patterns. The remaining work shifts toward creativity, strategy and human connection, where people still provide something machines cannot generate on their own.
2. The Strengths of Each Side
To build effective collaboration between people and AI, we first need a clear view of what each contributes. Humans and machines are not competing versions of the same thing. They are strong at different kinds of thinking, and the best results come when those strengths are deliberately combined.
Where humans excel
Creativity and imagination
Humans can invent concepts that have never existed before, combine ideas from distant fields, and deliberately break patterns. A designer can create a new visual style that does not match any template. A founder can imagine a business model no data set has ever seen. This kind of open ended, speculative thinking is rooted in experience, culture and emotion, not only in pattern matching.
Emotion, empathy and trust
People can sense tone, intention and unspoken meaning in ways current AI cannot. A nurse reading a patient’s fear, a manager noticing a team’s morale, or a mediator defusing a conflict all rely on emotional understanding and lived experience. Humans do not only process information. They relate to one another and build trust over time.
Ethics, values and responsibility
Deciding what is acceptable, what is fair and what trade offs are justified is a human task. AI systems can highlight risks, surface options and simulate consequences. They cannot choose which values to prioritise. Questions such as who should receive limited medical resources, how to handle biased data, or when to override an automated decision are ultimately ethical and civic choices.
Strategy and leadership
Humans can set direction in conditions of uncertainty, weigh conflicting objectives and choose what matters most. Leaders integrate financial results, team wellbeing, brand reputation, regulation and long term risk into a single judgment. Strategy is not only about optimising a metric. It is about deciding which metrics are worth pursuing.
Rich contextual understanding
People understand history, culture, local norms and individual circumstances. A lawyer interprets a contract differently for a small family business than for a multinational. A teacher adapts the same lesson for two classrooms with different social dynamics. This depth of context allows humans to interpret the same data in different, appropriate ways.
Where AI excels
Calculation and scale
AI can carry out vast numbers of calculations and look across millions of records without fatigue. It can recompute projections as soon as inputs change. Where a human analyst might examine hundreds of rows, an AI model can scan millions in seconds. This does not make it wiser. It does make it extremely capable at heavy numerical work.
Speed and logical consistency
Once trained and configured, AI systems respond quickly and apply the same rules every time. A model that checks transactions against regulatory thresholds will not lose focus late in the day or treat similar cases differently. This consistency is especially valuable in high volume, rule based environments such as claims triage, document classification or quality checks.
Pattern recognition in large data sets
AI is very good at finding structured and unstructured patterns that are invisible to the human eye. It can detect weak correlations across many variables, such as combinations of symptoms, customer behaviours or sensor readings that precede a failure. This allows it to support early warning systems in healthcare, finance, logistics and other fields.
Data analysis and prediction
Given sufficient historical data, AI can forecast likely outcomes with useful accuracy. It can estimate which customers may churn, which shipments may be delayed or which students may need extra help. The predictions are not perfect, but they provide a data informed starting point for human decision makers.
Reliable execution of repetitive tasks
AI powered systems can generate first drafts, fill in forms, route tickets and extract fields from documents at scale. They do not get bored. They do not slow down. This makes them ideal for the repetitive components that exist inside many professional roles.
Clear comparisons: how the roles differ
In each pairing, the distinction is clear.
- Creativity vs calculation
Humans originate new concepts and narratives. AI explores and refines within patterns it has already seen. A human strategist might design a new product category. An AI model can then test hundreds of possible price points and bundles for that product.
- Emotion and empathy vs logic and speed
Humans read feelings and nuance. AI processes inputs quickly and consistently. In a support centre, AI can triage tickets and route them correctly. Human agents then handle conversations where reassurance, apology or delicate negotiation are needed.
- Ethics and morality vs consistency
Humans decide what is right for a company, community or society. AI enforces the chosen rules at scale. A compliance team defines what counts as a high risk transaction. The AI system applies that framework to every transaction without drifting standards.
- Strategy and leadership vs raw data analysis
Humans choose the goal. AI helps explore how to reach it. Leadership decides to prioritise long term customer trust over short term revenue. AI then helps find which changes in process or product reduce complaints or increase satisfaction measures.
- Contextual understanding vs pattern recognition
Humans interpret patterns in context. AI finds the patterns quickly. For example, an AI tool might flag that a particular neighbourhood has rising default risk. A policy team adds local knowledge about industry closures, housing conditions or community history before deciding how to respond.
Synergy in practice
When collaboration is designed well, the relationship looks like this:
- A doctor using AI to detect tumours
The AI system analyses thousands of images and highlights suspicious areas with a probability score. The doctor brings medical training, understanding of the patient’s history and ethical responsibility for the final decision. The model reduces the chance of missed signals. The doctor decides what to do.
- A marketer using AI to predict behaviour
AI segments customers, forecasts response rates and generates variations of copy or creative. The marketer uses brand understanding, cultural awareness and strategic goals to choose which campaigns to run, which audiences to protect from over targeting and how to position the message.
- A teacher using AI to personalise lessons
The platform tracks each student’s progress and recommends exercises at the right difficulty level. The teacher reads classroom dynamics, motivates students, explains concepts in different ways and decides when a learner needs encouragement rather than another quiz. The system supports precision. The teacher provides meaning and connection.
In each case, the strongest results come not from AI working alone or people ignoring AI, but from human judgement guided by machine intelligence. The machine handles breadth, speed and repetition. The human handles purpose, interpretation and care.
3. The 3 Levels of Human–AI Collaboration
Collaboration between people and AI does not appear fully formed. It develops in stages, as capability grows and as organisations build the skills, processes, and safeguards needed to use these systems responsibly. A useful way to think about this evolution is through three levels: assisted work, augmented work, and autonomous systems.
Level 1: Assisted Work
At the first level, AI functions as a helper inside existing tasks. It supports the human, but it does not change who is responsible or how the work is structured.
What it looks like
- Email filters that sort messages into folders or mark potential spam.
- Grammar and style checkers that suggest edits to documents.
- Recommendation engines that surface products, articles, or videos.
- Search tools that rank results and highlight the most relevant entries.
Here, AI proposes. The human decides.
What it takes to work effectively
- Basic digital readiness
People need comfort with software tools and interfaces. They must know how to interpret suggestions, accept or reject them, and adjust preferences.
- Transparency of suggestions
Systems should show clearly what they changed or proposed. For example, a grammar tool highlights edits rather than silently rewriting text. This allows users to evaluate and learn from the assistance.
- Clear boundaries
Assisted tools should not trigger actions with legal or financial consequences without explicit confirmation from the user. For instance, a system may draft a reply but should not send it automatically.
- User training and trust
Workers need simple guidance: when to trust the tool, when to double check, and how to report errors. Trust at this stage is built through repeated experience that the tool helps rather than harms.
At this level, AI removes friction. It reduces small errors and saves time on micro tasks, but humans clearly remain in full control of decisions and outcomes.
Level 2: Augmented Work
At the second level, AI becomes a co-pilot. It does more than assist. It takes on meaningful parts of the job, while humans concentrate on higher level judgement, strategy, and relationships.
What it looks like
- Doctors using diagnostic models that flag early signs of disease on scans before the human eye might notice them.
- Financial analysts using predictive models to forecast risk, price scenarios, and portfolio behaviour.
- Lawyers using AI to review large document sets, identify relevant clauses, and propose initial summary arguments.
- Project managers using AI systems that simulate timelines, predict bottlenecks, and suggest resource allocations.
Here, decisions are shared. The AI supplies structured insight. The human interprets and signs off.
What it takes to work effectively
- High quality data and domain expertise
Reliable augmentation rests on accurate, representative data and strong human expertise. Doctors, lawyers, or analysts must understand both their field and the limitations of the model. They are not simply “pressing a button”. They are using the output as one piece of evidence among several.
- Model validation and monitoring
Before deployment, models need to be tested on separate data sets and real cases. Organisations must track performance over time, measure error rates, and check for bias or drift. This requires collaboration between data teams, domain experts, and risk or compliance teams.
- Clear decision protocols
There should be agreed rules about when human judgement can override the model and when escalation is required. For example, a hospital might require a specialist review for any AI generated diagnosis that significantly changes a treatment plan.
- User skills in interpretation
Professionals must be trained not only to use the tools, but to understand confidence levels, uncertainty, and context. They need to know when a prediction is strong, when it is weak, and what additional information they should seek.
- Ethical and legal frameworks
Because AI now influences significant decisions, organisations need policies about fairness, explainability, privacy, and documentation. Records of how AI was used in a decision should be available for review.
At this level, AI changes the shape of the job. Routine analysis moves to the machine. Humans move closer to roles that integrate technical insight with ethical, relational, and strategic thinking.
Level 3: Autonomous Systems
At the third level, AI systems can act independently within carefully defined limits, with humans setting goals, rules, and oversight structures.
What it looks like
- Self driving vehicles that control steering, acceleration, and braking on specific routes under defined conditions.
- Algorithmic trading systems that execute large volumes of buy or sell orders according to predefined strategies and risk thresholds.
- Robotic surgery systems where robots perform precise movements, with surgeons supervising and intervening when necessary.
- Industrial robots in warehouses or factories that navigate spaces, coordinate tasks, and avoid collisions with minimal direct control.
Here, the AI executes actions in real time. Humans design objectives, constraints, and monitoring, and step in when the system reaches a boundary or behaves unexpectedly.
What it takes to work effectively
- Precise definition of scope and environment
Autonomous systems perform best in environments that are clearly described and reasonably predictable. This often means starting with constrained domains such as specific road types, factory floors, or logistics yards, rather than open, unconstrained situations.
- Robust safety engineering
Safety must be built in at multiple levels. This includes fail-safe mechanisms, conservative default behaviours, emergency stop functions, and continuous sensor checks. Designers must anticipate failure modes and plan how the system should respond.
- Human in or on the loop
Even when the system acts independently, humans should remain involved in monitoring and oversight. “Human in the loop” refers to humans who approve or veto critical actions. “Human on the loop” refers to humans supervising overall system behaviour, ready to intervene when needed. A minimal code sketch of this distinction follows this list.
- Regulation and accountability structures
Autonomous systems often operate in regulated domains such as transport, healthcare, and finance. Effective use requires compliance with laws, clear assignment of responsibility, logging of decisions, and mechanisms for investigation and redress when things go wrong.
- Continuous evaluation and learning
Autonomous systems must be updated as conditions change. This involves collecting operational data, reviewing incidents and near misses, and adjusting models or rules. Feedback from users, regulators, and affected communities should inform this process.
- Organisational readiness and culture
Teams need to be trained to work with autonomous systems, including understanding their limits, responding to alerts, and interpreting logs. Leadership must set expectations about when to trust automation and when to pause or roll back.
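To make the “in the loop” versus “on the loop” distinction concrete, here is a minimal sketch in Python. The action names, the risk scale, and the request_human_approval stub are illustrative assumptions, not part of any real deployment.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk_score: float  # assumed scale: 0.0 (routine) to 1.0 (critical)

def request_human_approval(action: Action) -> bool:
    """Stub for a human review step; a real system would route this to an
    operator console and block until a decision is recorded."""
    print(f"Approval requested for: {action.name}")
    return True  # placeholder decision

def human_in_the_loop(action: Action, threshold: float = 0.7) -> bool:
    """In the loop: critical actions wait for explicit human sign-off."""
    if action.risk_score >= threshold:
        return request_human_approval(action)
    return True  # routine actions proceed automatically

def human_on_the_loop(action: Action, alert_log: list) -> bool:
    """On the loop: all actions execute autonomously; humans supervise
    through alerts and can pause or roll back after the fact."""
    if action.risk_score >= 0.7:
        alert_log.append(f"High-risk action executed: {action.name}")
    return True

alerts: list = []
for a in [Action("reroute shipment", 0.2), Action("emergency stop", 0.9)]:
    human_in_the_loop(a)           # gate before execution
    human_on_the_loop(a, alerts)   # monitor after execution
print(alerts)
```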
At this level, AI systems take on sustained, independent activity. However, accountability does not transfer to the machine. It remains with the humans and institutions that design, approve, deploy, and supervise the system.
Across all three levels, the direction of travel is the same: AI handles more of the repetitive and computational burden, while humans take on a clearer role as designers, supervisors, and stewards of intelligent systems. Trust and collaboration deepen as organisations gain experience and build safeguards, but the responsibility for outcomes ultimately remains human.
4. The New Workforce — Blending Skills and Systems
In the coming decade, the modern workforce will not be neatly separated into “tech people” and “everyone else”. Almost every profession will combine human expertise with the use of intelligent systems, in the same way that almost every office job today involves computers and the internet.
Working effectively in that environment means understanding not only your own discipline, but also how to cooperate with AI tools. The expectation will not be that everyone becomes a programmer, but that professionals in law, finance, healthcare, engineering, education, and public service can use AI in a precise, responsible, and informed way.
Future professionals will need to:
- Understand how AI systems work
This does not mean knowing every mathematical detail of a model. It means understanding the basics: how AI is trained, what data it uses, what its limitations are, where it is strong, and where it is fragile. A manager should know the difference between a rule based system and a learning system. A doctor should know that a diagnostic model is a probabilistic tool, not an oracle. This foundational understanding prevents blind trust as well as unnecessary fear.
- Communicate effectively with digital tools
AI systems respond to instructions, context, and constraints. Professionals will need to learn how to specify tasks clearly, provide the right information, and refine outputs through iteration. This includes skills like prompt design, structured questioning, and giving feedback that improves future responses. In practice, this is similar to learning how to brief a junior colleague: the clearer the instruction, the better the result. A small sketch of this kind of structured briefing appears after this list.
- Design, interpret, and correct automated outputs
AI will frequently generate first drafts, analyses, or recommendations. The human role will be to decide how these should be used. This includes checking outputs against domain knowledge, spotting obvious errors or bias, deciding when extra data is needed, and reshaping results for real world use. A lawyer might correct the structure of an AI drafted clause. A teacher might adapt an automatically generated exercise for a specific class. The skill lies in treating AI output as a starting point, not as a final product.
- Balance logic with ethics and emotion
AI systems are good at optimising for explicit objectives such as speed, accuracy, or cost. They do not understand fairness, dignity, or long term social impact unless humans deliberately encode these concerns. Professionals will need to weigh efficient answers against ethical and human considerations. For example, a hiring algorithm might rank candidates in a way that is statistically predictive but socially unacceptable. A human decision maker has to recognise this tension and adjust the system or override its suggestions.
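As an illustration of what “briefing a junior colleague” looks like when the colleague is a language model, the sketch below assembles an instruction from explicit parts. The field names and example content are assumptions for illustration; no particular AI product or API is implied.

```python
def build_brief(task: str, context: str, constraints: list[str],
                output_format: str) -> str:
    """Compose a structured instruction containing the same elements a
    good briefing to a junior colleague would: the task, its context,
    the limits, and the expected shape of the result."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Task: {task}\n\n"
        f"Context: {context}\n\n"
        f"Constraints:\n{constraint_lines}\n\n"
        f"Output format: {output_format}"
    )

prompt = build_brief(
    task="Draft a one-page summary of Q3 customer complaints.",
    context="The audience is the operations board; they have five minutes.",
    constraints=[
        "Use only the attached complaint log; do not speculate.",
        "Flag any category that grew more than 20 percent.",
    ],
    output_format="Three short sections: trends, risks, recommended actions.",
)
print(prompt)
```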
As a result, AI literacy will become as fundamental as computer literacy became in the late twentieth century. Being able to read, write, and operate a word processor or spreadsheet once marked the difference between fully participating in modern work and being left behind. In a similar way, being able to reason about AI, collaborate with it, and direct it will increasingly define who can contribute at a high level in data rich, knowledge based environments.
Knowing how to think with AI, rather than only around it or against it, will be one of the central professional capabilities of the twenty first century.
5. Building Symbiotic Intelligence
The next stage of collaboration is not simple cooperation; it is symbiosis.
In biology, symbiosis describes two organisms that live together in a way that benefits both. Each partner supplies something the other lacks. Over time, both organisms adapt to each other so closely that they cannot reach the same level of fitness alone. The relationship becomes a shared survival strategy rather than a temporary convenience.
We are starting to see the outline of a similar relationship between humans and AI. At its core, this relationship has three movements:
- AI learns from human insight.
Human experts provide goals, examples, corrections, and constraints. When a doctor corrects a misclassified scan, when a financial analyst adjusts a forecasting model, or when a teacher rewrites an AI generated lesson, that human intervention becomes training signal. The system refines its internal parameters and improves. In economic terms, human judgment is continuously converted into reusable intellectual capital that can scale across time zones and markets.
- Humans grow through AI feedback.
At the same time, AI systems expose patterns that humans cannot easily see. They can reveal blind spots in decision making, inefficiencies in workflows, or biases in historical data. A manager who sees that an AI model consistently predicts lower churn for a certain type of customer may rethink product strategy. A policymaker who observes that a model highlights systematic underinvestment in specific regions may redesign funding formulas. The AI becomes a mirror that reflects the structure of our own decisions back at us, often with uncomfortable clarity.
- Both become smarter through continuous exchange.
Each adjustment on one side triggers adaptation on the other. Human corrections improve the model, and improved models change how humans work, which generates new data, which further improves the model. This is a closed learning loop. At the level of a firm or an institution, that loop can be thought of as a new production function: capability is no longer produced only by adding labour and capital; it is also produced by the rate of mutual learning between people and machines.
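The shape of this closed loop can be caricatured in a few lines of code. The sketch below is a toy simulation under strong assumptions (accuracy and skill reduced to single numbers, fixed learning rates); it shows only the dynamic, not a real training process.

```python
# Toy simulation of a human-AI learning loop (illustrative assumptions only).
model_accuracy = 0.70   # fraction of cases the model gets right
human_skill = 0.80      # fraction of model errors humans catch and correct

for cycle in range(5):
    # Humans correct a share of the model's errors; those corrections
    # become training signal that improves the model.
    corrections = (1 - model_accuracy) * human_skill
    model_accuracy = min(0.99, model_accuracy + 0.5 * corrections)

    # A better model surfaces blind spots, nudging human skill upward too.
    human_skill = min(0.99, human_skill + 0.02)

    print(f"cycle {cycle + 1}: model={model_accuracy:.3f}, human={human_skill:.3f}")
```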
Viewed at scale, this suggests a new way to describe economic potential. Traditional theory focuses on factors such as capital stock, labour supply, technology level, and human capital. In an AI mediated economy, a new factor becomes visible: the symbiosis coefficient of an organisation or a country, which is the degree to which human institutions, skills, incentives, and infrastructure are designed to support this two way learning between human and machine.
An organisation with a high symbiosis coefficient does not simply deploy AI tools. It structures work so that every interaction generates useful feedback, every decision is traceable and improvable, and every model output is reviewed by people who have both domain expertise and AI literacy. Over time, such organisations become self improving systems. Schools refine their teaching strategies as students and AI tutors learn from each other. Hospitals shorten diagnosis and treatment cycles as clinicians and diagnostic models co-evolve. Cities adjust transport, energy, and safety policies in near real time as human planners interpret streams of AI derived insight and feed new goals back into the system.
In this view, the most competitive institutions of the future will not simply be the ones with the largest models or the most data. They will be the ones that design the tightest, most ethical, and most productive symbiotic loops between human judgment and machine intelligence.
6. Challenges of Collaboration
As collaboration between humans and AI deepens, the opportunities grow, but so do the risks. From a global economic and policy perspective, these risks are not side issues; they are structural questions that will shape productivity, inclusion, and stability over the coming decades.
Overreliance
When systems appear accurate and convenient, people tend to stop questioning them. In an AI mediated economy, this can lead to a quiet transfer of judgment from humans to models. If loan officers, doctors, judges, or civil servants defer automatically to AI outputs, society drifts toward automated decisions without real accountability. Managing overreliance means building cultures and regulations that keep humans in an active supervisory role. Professionals must be trained to ask where a result came from, what its assumptions are, and when to override it. Institutions will need audit processes that test decisions against human and social values, not only against technical performance.
Transparency
Many advanced models are difficult to interpret. They can produce useful predictions without providing a clear explanation of how they arrived at a specific outcome. This creates tension in areas like finance, healthcare, and law, where citizens expect to understand decisions that affect their lives. At the level of global governance, this raises questions about trust, contestability, and due process. Addressing transparency is not only a technical challenge. It requires standards for documentation, explanation tools appropriate for non specialists, and legal frameworks that define when a decision must be explainable in human terms before it can be enforced.
Bias
AI systems learn from data, and data reflects existing social and economic structures. If the training data contains historical discrimination, unequal access, or skewed representation, the resulting systems can reinforce those patterns at scale. From a World Economic Forum perspective, this is not just a fairness issue within individual firms; it is a risk to social cohesion and inclusive growth. Reducing bias requires diverse data sets, careful model evaluation across groups, and governance mechanisms that give affected communities a voice. It also demands that leaders treat fairness as a design objective, not as a purely technical afterthought.
Skill gaps
As AI becomes a central production technology, the gap between those who can work with it and those who cannot will influence wage inequality and national competitiveness. Workers who lack AI literacy may find themselves locked out of higher productivity roles. Regions that underinvest in skills may struggle to attract or grow AI enabled industries. Closing these gaps calls for coordinated investment in education, re skilling, and lifelong learning, from primary school curricula to executive training. Policy makers, employers, and educators will need shared strategies to ensure that gains from AI adoption do not concentrate only in a narrow segment of the workforce.
Ethical dilemmas and regulatory lag
Innovation cycles in AI move quickly. Legal and regulatory frameworks move slowly. This creates periods where powerful capabilities exist without clear rules on their deployment. Questions about surveillance, manipulation, autonomous weapons, and large scale displacement are already visible. If governance fails to keep pace, trust in both institutions and technology can erode. The task for governments, international bodies, and industry coalitions is to build agile regulatory models that protect core rights while allowing responsible experimentation. This includes impact assessments, shared safety standards, cross border cooperation, and mechanisms to pause or redirect applications that create systemic risk.
These challenges are not arguments for rejecting AI. They are signals that maturity is required. A world that relies on intelligent systems needs equally intelligent oversight. The practical response is a combination of awareness, structured education, institutional design, and shared accountability. Humans remain responsible for setting goals, defining boundaries, and correcting course when needed. AI extends capacity, but people and institutions remain answerable for the outcomes.
7. The Future of Collaboration
In the near future, most teams will include both people and AI systems working side by side. Digital agents will join meetings as analytical contributors, prepare summaries, track action items, and surface relevant data at the right moment. Everyday workflows will feature virtual co workers that monitor processes, forecast outcomes, and coordinate handoffs between departments, so that many important decisions grow out of a structured dialogue between human insight and machine analysis.
From a policy and governance perspective, the central challenge is to guide this hybrid model so that it strengthens productivity, protects human dignity, and supports social stability. Organisations that succeed will design teams where creativity, ethical judgment, and lived experience are combined with computation, memory at scale, and precise pattern detection. In that setting, AI becomes part of a balanced division of labour, and institutions can widen their impact without losing the human qualities that build trust and legitimacy.
To realise this potential, institutions such as the World Economic Forum could encourage a set of guiding principles and practical recommendations:
- Treat AI agents as teammates with clearly defined roles
Organisations should specify where AI contributes (for example analysis, drafting, monitoring, or recommendation) and where humans retain sole authority (for example values, priorities, hiring, sanctions, and final accountability). Hybrid team design should be intentional, not improvised tool use.
- Require human ownership of outcomes
In every workflow, responsibility for results should remain with a named human or governance body. AI may inform, simulate, and propose. Humans sign off, explain decisions, and answer to regulators, stakeholders, and the public.
- Standardise “AI participation rules” for meetings and decisions
As digital agents join meetings, common norms are needed. For instance: when AI generated material must be labelled, how recommendations are documented, how dissenting human views are recorded alongside model outputs, and how sensitive topics are handled. This protects both transparency and trust.
- Invest in AI literacy at all levels
Boards, executives, managers, and frontline workers need a shared baseline of understanding. They should know what AI is doing in their workflows, how to question its output, and when to escalate concerns. AI literacy frameworks can sit alongside existing financial and digital literacy initiatives and should be treated as a strategic competency, not a niche skill.
- Embed hybrid performance metrics
Evaluation systems should measure not only technical efficiency but also the quality of human work that AI enables. Key indicators might include decision quality, error reduction, employee engagement, inclusion, and long term resilience. This encourages organisations to use AI to enhance meaningful work rather than simply to reduce headcount.
- Promote experimentation within safe boundaries
Global forums can encourage pilot projects in healthcare, education, public administration, and industry, while defining minimum safeguards for privacy, bias testing, and explainability. The goal is to learn how hybrid teams function in practice and to share patterns that work across countries and sectors.
If this evolution is managed deliberately, hybrid human AI teams can deliver what most organisations and societies are striving for:
- Faster and more comprehensive thinking, through continuous access to structured analysis.
- Smarter and more robust decisions, through the combination of human judgment with machine scale evidence.
- More meaningful human work, as people move away from repetitive processing and toward roles that require empathy, creativity, interpretation, and leadership.
The task for global institutions is not only to encourage the adoption of AI, but to help design the collaborative fabric in which people and intelligent systems can work together safely, productively, and in service of shared human goals.
8. Real-World Transformation
Now that we’ve explored how AI transforms work, industries, ethics, and society, you’ve reached the closing part of the first true “application” module in Stage 1.
You’ve seen the full scope of what AI can do, from saving lives to reshaping cities to redefining meaning itself. The case studies that follow ground that scope in documented, real-world deployments.
Case Study: AI Supported Breast Cancer Screening in Sweden
1. Context
Breast cancer screening programs in Europe rely heavily on mammography and double reading.
In double reading, two radiologists independently review each mammogram to reduce the chance of missing cancer.
Sweden has one of the most established national screening programs in the world. Women between 40 and 74 are regularly invited for mammograms. This creates two pressures at the same time:
- Very high image volumes every year.
- A shortage of experienced breast radiologists, especially outside major cities.
By the late 2010s, Swedish cancer centers faced a familiar set of problems:
- Growing screening demand as the population aged.
- Limited capacity to recruit and train new radiologists quickly.
- Risk of burnout among existing specialists.
- The need to maintain or improve cancer detection without increasing false positives.
Against this backdrop, researchers and clinicians in Sweden started testing whether AI could safely support or partially replace one of the human readers in the double reading workflow.
The most influential project so far is the MASAI trial (Mammography Screening with Artificial Intelligence), a large randomized controlled trial embedded directly into the Swedish national screening program.
2. The Problem
The core challenge had three parts:
- Workload and capacity
- Every mammogram needed two independent readings.
- In the trial, the control group alone required over 83,000 individual screen readings for around 40,000 women.
- Radiologists were spending a large share of their time on routine screening, leaving less time for complex cases, procedures, and multidisciplinary meetings.
- Detection and safety
- The system had to keep cancer detection at least as good as standard double reading.
- Regulators and clinicians were concerned about false negatives (missed cancers) and false positives (unnecessary recalls and anxiety).
- Scalability
- Any new approach had to work inside an existing national program, with real patients and real constraints, not just in a lab.
- The solution needed to be robust across equipment types, clinical teams, and different Swedish regions.
The question was: can AI safely support or replace one radiologist in double reading, while maintaining or improving cancer detection and reducing workload in a measurable way?
3. The AI Solution
3.1. The system
Swedish centers evaluated a commercial breast AI system (for example Transpara from ScreenPoint Medical in the MASAI trial) that uses deep learning to analyze digital mammograms and produce:
- A suspicion score for each breast or exam.
- Visual heatmaps indicating regions that appear suspicious.
- A triage decision about whether the exam needs one or two human readings.
The model was trained on hundreds of thousands of mammograms with known outcomes. During training it learned to distinguish between:
- Exams that ultimately showed cancer.
- Exams that were normal.
It did this by repeatedly predicting, comparing against ground truth, and adjusting internal parameters (backpropagation) until its error rate became very low.
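That predict, compare, adjust pattern can be shown at toy scale. The sketch below trains a logistic regression classifier on synthetic “exams” with plain gradient descent; it is a pedagogical stand-in, not the deep learning pipeline used by commercial mammography systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two features per "exam"; label 1 = cancer, 0 = normal.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1

for epoch in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predict a suspicion score
    error = p - y                           # compare against ground truth
    w -= lr * (X.T @ error) / len(y)        # adjust internal parameters
    b -= lr * error.mean()

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.3f}")
```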
3.2. How the trial was structured
In the MASAI randomized trial:
- Total participants:
- 80,033 women in the Swedish screening program.
- Groups:
- Control group: standard double reading by two radiologists.
- Intervention group: AI plus one radiologist.
- Workflow in the AI group:
- The AI system scored each mammogram.
- Exams the AI scored as low risk were cleared with a single human reading instead of two.
- Exams with higher suspicion were read with AI support and, if needed, escalated for additional review.
In practical terms, the AI replaced a large fraction of the second radiologist readings while still allowing human experts to overrule or review AI suggestions where necessary.
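In code, the triage logic of the intervention arm might look like the sketch below. The threshold values and function name are invented for illustration; the actual trial used the vendor’s calibrated risk scores and protocol-defined cutoffs.

```python
def triage(ai_score: float, low_cutoff: float = 0.1,
           high_cutoff: float = 0.9) -> str:
    """Route a mammogram based on an AI suspicion score.
    Cutoffs here are illustrative, not the trial's actual values."""
    if ai_score < low_cutoff:
        return "single human reading"          # AI-cleared, one radiologist
    if ai_score >= high_cutoff:
        return "double reading + escalation"   # flagged for extra review
    return "double reading"                    # standard two-reader review

for score in (0.02, 0.40, 0.95):
    print(f"AI score {score:.2f} -> {triage(score)}")
```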
3.3. Safety mechanisms
To satisfy clinical and ethical standards, several safeguards were built into the design:
- The primary endpoint was interval cancer rate (cancers missed at screening that appear before the next screening round). The AI strategy had to be non inferior to standard care on this measure.
- Radiation dose and imaging protocols remained unchanged.
- Radiologists had full authority to recall patients regardless of AI score.
- The trial was monitored by independent committees and funded by public research bodies like the Swedish Cancer Society and regional cancer centers, which helped ensure neutrality and rigorous oversight.
4. Impact
4.1. Detection performance
In the interim analysis of MASAI:
- Cancers detected
- AI group: 244 screen detected cancers.
- Control group: 203 screen detected cancers.
- This is about 20 percent more cancers detected with AI supported screening.
- Cancer detection rate
- AI group: 6.1 cancers per 1,000 women screened.
- Control group: 5.1 cancers per 1,000 women screened.
- False positive rate
- False positives were 1.5 percent in both groups.
- That means AI did not increase the number of women who were called back unnecessarily.
These numbers are important because they answer the key clinical question:
Does AI miss more cancers or cause more false alarms?
In this trial, the answer so far is no. Cancer detection improved while the false positive rate stayed essentially the same.
Other European studies, including recent German real world implementations, report similar trends: higher cancer detection rates (roughly 15 to 20 percent increases) without a significant rise in false positives when AI supports radiologists.
4.2. Workload and efficiency
The MASAI trial also measured radiologist workload:
- Total screen readings
- Standard double reading group: 83,231 readings.
- AI group: 46,345 readings.
- Workload reduction
- This corresponds to a 44.3 percent reduction in screen reading workload for breast radiologists.
In other words, almost half of the reading work was removed, while detection performance improved.
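The headline figures follow directly from the reported counts. The quick check below uses the approximate group sizes implied by the trial total of 80,033 participants (roughly 40,000 women per arm); the exact denominators in the published analysis differ slightly.

```python
# Reported counts from the MASAI interim analysis.
ai_cancers, control_cancers = 244, 203
women_per_arm = 40_000  # ~80,033 participants split across two arms

print(f"AI arm:      {ai_cancers / women_per_arm * 1000:.1f} cancers per 1,000")
print(f"Control arm: {control_cancers / women_per_arm * 1000:.1f} cancers per 1,000")
print(f"Relative increase: {(ai_cancers / control_cancers - 1) * 100:.0f}%")

# Reported total screen readings per arm.
control_readings, ai_readings = 83_231, 46_345
reduction = (control_readings - ai_readings) / control_readings
print(f"Reading workload reduction: {reduction * 100:.1f}%")
```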
This has several practical implications for healthcare systems:
- Radiologists can spend more time on:
- Complex diagnostic work up.
- Interventions and procedures.
- Multidisciplinary case discussions.
- Screening programs can expand capacity without recruiting twice as many specialists, which is extremely difficult in many countries.
- Waiting times for patients can be reduced, since screening backlogs are easier to clear when each radiologist reads fewer routine exams per day.
These findings are strong enough that the UK NHS has launched what is expected to be the world’s largest breast screening AI trial (about 700,000 mammograms) using similar principles, explicitly citing the Swedish data as evidence that AI can reduce workload by around half without increasing harm.
4.3. Clinical and patient impact
From a patient point of view, the transformation looks like this:
- Earlier detection:
- More cancers found at the screening stage rather than later when symptoms develop.
- Earlier cancers are usually smaller and more treatable, which improves survival and can allow less aggressive treatment.
- Stable false positives:
- The risk of being called back unnecessarily has not increased. In MASAI, the false positive rate remained around 1.5 percent in both groups.
From a system perspective:
- Safety: AI supported screening has been judged feasible and safe in multiple prospective and real world studies in Europe, with interval cancer rates and false positive rates staying within accepted thresholds.
- Scalability: National programs can realistically consider moving from double human reading to one radiologist plus AI, without sacrificing quality.
4.4. What this tells us about AI in healthcare
This case shows several key principles of real world AI transformation in healthcare:
- AI can strengthen, not weaken, safety
- It is used as an additional reader or triage system that supports specialists, rather than as an opaque replacement.
- Value is created in two dimensions at once
- Clinical value: more cancers caught at screening, similar or lower false positive rates.
- Operational value: around 40 to 45 percent reduction in specialist workload, which directly affects capacity, waiting times, and staff burnout.
- Integration matters as much as the algorithm
- The trial succeeded because AI was fully integrated into the existing screening workflow, with clear escalation rules and human override, rather than being bolted on in isolation.
- Evidence based deployment is possible
- This was not a small pilot. It was a randomized controlled trial inside a national program, with ongoing follow up on interval cancers and long term outcomes.
For policymakers and health systems, the Swedish experience is a concrete example that:
- AI can safely become part of core clinical infrastructure, not just a research toy.
- When governed properly, it can deliver measurable benefits in quality, efficiency, and access at the same time.
Case Study: How BlackRock’s Aladdin Transformed Institutional Risk Management
1. Context
Modern finance runs on large, complex portfolios.
Pension funds, insurers, sovereign wealth funds, and central banks hold thousands of securities across equities, bonds, derivatives, and alternatives, often spread across different internal teams and external managers.
By 2020, BlackRock’s Aladdin platform was already used to support risk and portfolio management for about 21.6 trillion US dollars in assets, across roughly 30,000 portfolios for clients such as CalPERS, Deutsche Bank, Prudential and several central banks.
In 2019 the Bank of Israel, which manages the country’s foreign exchange reserves, adopted Aladdin Risk to help oversee and stress test its reserve portfolio.
This gives us a concrete institutional case, rather than a generic “AI in finance” example.
2. The Problem
Institutions like the Bank of Israel faced several structural issues in reserve and balance sheet management:
- Fragmented data and tools
- Risk, performance, and compliance were often handled in separate systems.
- Different asset classes were modelled in different tools, which made it hard to see the true risk of the total portfolio on a single screen.
- Slow and inflexible scenario analysis
- Running stress tests such as “What happens if interest rates jump by 200 basis points, equity markets drop 20 percent, and credit spreads widen” often required manual modelling that could take days.
- This limited the number of scenarios that could be explored before major decisions, especially in volatile markets.
- Growing complexity of risks
- Reserve managers needed to understand not only market and interest rate risk, but also liquidity risk, credit risk, and new dimensions like climate and ESG exposure.
- Traditional tools struggled to incorporate many correlated risk factors at once and to keep models up to date with current market data.
- Regulatory and governance pressure
- After the 2008 financial crisis, regulators and boards expected more frequent and more sophisticated risk reporting, including detailed “what if” scenarios for extreme events.
- Producing these reports with legacy systems was labour intensive and error prone.
In short, the institution had plenty of data but lacked a unified, intelligent system that could turn it into fast, reliable, and forward looking risk insight.
3. The AI Solution: Aladdin Risk
Aladdin (Asset, Liability, Debt and Derivative Investment Network) is BlackRock’s integrated risk and portfolio management system. It uses large scale data processing, stochastic simulation, and machine learning style analytics to provide a real time view of risk across entire portfolios.
For the Bank of Israel and similar clients, the implementation focused on several capabilities:
a) Unified data and risk view
Aladdin ingests positions, trades, benchmarks, and reference data for all asset classes into one common data model.
That allows reserve managers to see, in a single interface:
- Positions and exposures
- Performance and attribution
- Factor sensitivities such as duration, spread risk, equity beta
- Limits and compliance flags
Central Banking’s award write up describes Aladdin Risk as providing daily transparency on portfolio positions, performance, risk, scenario analysis, compliance, and oversight, and enabling institutions to “rethink and redefine portfolio management”.
b) Advanced scenario analysis and stress testing
Aladdin uses Monte Carlo style simulation and historical or hypothetical scenarios to estimate portfolio behaviour under thousands of possible futures, including extreme events such as a pandemic or a major default.
Clients can ask questions such as:
- How would our reserves react if US Treasury yields rose by 150 basis points and credit spreads widened?
- What happens to our portfolio under a repeat of the 2008 crisis or a specific geopolitical shock?
The system then calculates impacts on value, risk measures such as Value at Risk, and key ratios, all within a unified framework.
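The mechanics behind such a scenario engine can be illustrated at toy scale. In the sketch below, a portfolio with a bond-like leg and an equity leg is revalued under many random factor shocks and one deterministic stress. The sensitivities, volatilities, and correlation are invented numbers; a production system such as Aladdin uses far richer factor models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy portfolio: sensitivities to two risk factors (invented numbers).
portfolio_value = 1_000.0      # millions
duration = 5.0                 # price sensitivity to rates, in years
equity_exposure = 300.0        # millions exposed to an equity index

# Monte Carlo: 100,000 correlated factor scenarios.
n, corr = 100_000, -0.3        # rates up often pairs with equities down
rate_moves = rng.normal(0.0, 0.005, n)   # ~50 bp standard deviation
equity_moves = (corr * rate_moves / 0.005 * 0.02
                + np.sqrt(1 - corr**2) * rng.normal(0.0, 0.02, n))

# P&L per scenario: duration approximation for the bond leg,
# linear exposure for the equity leg.
pnl = -duration * portfolio_value * rate_moves + equity_exposure * equity_moves

var_99 = -np.percentile(pnl, 1)
print(f"99% Value at Risk: {var_99:.0f}m")

# Deterministic stress: rates +150 bp, equities -20%.
stress = -duration * portfolio_value * 0.015 + equity_exposure * (-0.20)
print(f"Stress scenario P&L: {stress:.0f}m")
```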
c) AI assisted risk monitoring
Aladdin continuously processes large volumes of market and portfolio data.
Using statistical and machine learning techniques, it can:
- Highlight positions or portfolios whose risk profiles are drifting away from mandates.
- Detect concentration and correlation risks that are not obvious from headline exposure numbers.
- Provide “what if” analyses for changes in allocation, duration, or hedging strategy.
This turns risk management from a periodic, backward looking exercise into an ongoing, forward looking process.
d) Institutional scale and cloud infrastructure
Aladdin runs on large scale compute infrastructure and increasingly uses cloud platforms such as Microsoft Azure and Snowflake, which allows it to handle very large portfolios and complex simulations at speed.
BlackRock reports that Aladdin now supports technology solutions for more than 1,000 clients through its platform, across banks, insurers, asset managers, corporates, and central banks.
For a central bank or large asset owner, this means they can run risk calculations that would have been impractical or extremely slow on older, in house systems.
e) Concrete institutional adoption
The Bank of Israel publicly highlighted its use of Aladdin Risk for reserve management. In BlackRock’s materials, senior risk analyst Roee Levy described Aladdin as “a major step for improving and promoting our risk management”.
Other institutions such as CalPERS, Deutsche Bank, Prudential, and several corporate treasuries also rely on Aladdin for their risk analytics, which indicates that this is not a one off experiment but a widely adopted infrastructure.
4. The Impact
Because many institutions do not publish exact internal efficiency figures, we combine reported facts with documented qualitative impacts.
a) Scale and systemic importance
- By 2020, Aladdin was used to manage or analyse around 21.6 trillion US dollars in assets globally.
- BlackRock states that Aladdin technology is used by more than 1,000 clients, including banks, insurers, pension funds, wealth managers, corporates, and central banks.
This scale means that a significant portion of global institutional finance now runs risk management and portfolio analytics through one AI enabled platform.
b) Risk management quality
For reserve managers such as the Bank of Israel, centralbanking.com and BlackRock report several concrete improvements:
- Daily, portfolio wide transparency on positions, exposures, and risk, instead of fragmented views.
- More robust scenario analysis, including ESG and climate related feeds, that allows institutions to explore a wide range of macroeconomic and stress scenarios in a consistent framework.
- In the words of the Bank of Israel’s senior risk analyst, Aladdin represents “a major step for improving and promoting our risk management”, suggesting a qualitative leap in how risk is monitored and communicated to senior decision makers.
Although exact percentages are not disclosed, the move from manual, model by model stress testing to a unified AI driven engine clearly reduces operational risk and response time in crises.
c) Operational efficiency and decision speed
Case descriptions from BlackRock and independent analyses emphasise that Aladdin:
- Consolidates multiple legacy systems into one platform, which reduces manual data reconciliation and the risk of inconsistent figures in different reports.
- Allows thousands of “what if” scenarios to be generated and summarised for portfolio managers and risk committees, instead of a small number of hand built scenarios.
This has two practical effects:
- Faster decisions
Risk and investment committees can test more options before adjusting allocations or hedging, because scenario results are available more quickly.
- More informed decisions
Managers see how their portfolios behave under a wide range of simulated futures, including low probability, high impact events such as pandemics or severe credit shocks.
d) Business and ecosystem impact
From a business perspective:
- BlackRock reports that its technology business, centred on Aladdin, represents about 8 percent of company revenue, more than 1.5 billion US dollars, and continues to grow as more institutions adopt the platform.
- Recent deals, such as Citigroup’s decision in 2025 to move roughly 80 billion US dollars of client assets to be run using Aladdin Wealth, show that large banks now treat AI enabled platforms as core infrastructure rather than optional tools.
For the wider financial system, the presence of a common, AI driven risk engine across many major institutions has three notable consequences:
- Higher baseline standards for risk analytics
Even smaller institutions that cannot build Aladdin scale systems in house can access advanced risk tools through the platform, which raises the overall quality of risk management.
- Faster propagation of best practices
New risk methodologies, climate scenarios, or stress testing frameworks can be rolled out across many clients at once, instead of each institution reinventing the wheel.
- Shift in human roles
Risk teams spend less time collecting and cleaning data, and more time interpreting results, designing scenarios, and advising boards and regulators. This is a practical example of AI shifting humans from mechanical analysis to strategic judgement.
Case Study: Skyline AI and JLL
How Machine Learning Reshaped Commercial Real Estate Investment
1. Context
For most of the twentieth century commercial real estate investment relied on three things: relationships, local knowledge, and spreadsheets.
Investment teams would:
- Collect limited public data on sales and rents.
- Build bespoke Excel models for each asset.
- Spend weeks visiting properties, calling brokers, and debating assumptions in committees.
Even when more digital data became available, it was scattered across public records, brokerage reports, demographic sources, and proprietary spreadsheets. Analysts simply could not absorb it all.
This created a structural problem. Investors often sat on large amounts of undeployed capital, commonly called “dry powder”. By 2019, institutions were holding more than 300 billion US dollars in real estate dry powder, in part because they struggled to identify and underwrite enough attractive deals with traditional methods.
In this environment an Israeli start up called Skyline AI emerged. The company built an AI platform for institutional grade real estate that attracted backing from Sequoia Capital, JLL, and DWS, and was later acquired by JLL to support its global investment and asset management business.
2. The Problem
Institutional investors and lenders faced three related challenges:
- Limited and biased analysis
Traditional underwriting could only use a small subset of available data. Analysts focused on a handful of variables such as rent, occupancy, and a small group of comparables. Important signals such as population shifts, school quality, crime statistics, tenant quality, or future infrastructure plans were either ignored or only considered subjectively.
- Slow deal cycles
Underwriting a single multifamily or commercial asset could take several weeks. By the time a model was complete, a competing investor might already have secured the deal. Industry research on AI underwriting notes that traditional CRE analysis often takes days or weeks, which sharply limits the number of opportunities that a team can evaluate.
- Underused capital
Because of data complexity and speed constraints, firms frequently failed to identify viable assets in time. Skyline AI’s own analysis highlighted that investors were leaving billions in returns on the table as capital sat idle while teams searched for deals.
The core question for investment managers became:
How can we evaluate thousands of assets, across many markets, with more data and less time, without sacrificing discipline?
3. The AI Solution
Skyline AI built a machine learning platform aimed at answering exactly that question.
3.1 Data foundation
The platform aggregates and normalizes a very large set of data sources for United States institutional real estate. Public material indicates that Skyline AI:
- Ingests more than 100 separate data sources.
- Processes over 10,000 data points per asset, covering financials, physical characteristics, demographics, leasing, and market behaviour.
These sources include:
- Public property and tax records.
- Historical transactions and rent rolls.
- Demographic and income data.
- Points of interest, transit access, and school quality.
- Macro indicators such as interest rates and employment patterns.
The system cleans and aligns these heterogeneous inputs, then builds a time series history for each asset and its surrounding market.
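To make that normalisation step concrete, here is a minimal sketch in Python of aligning two mismatched feeds onto a shared asset level time series. All column names and figures are hypothetical; Skyline AI’s actual pipeline is proprietary.

```python
import pandas as pd

# Hypothetical miniature versions of two source feeds. In a real pipeline
# each feed arrives with its own schema, frequency, and identifiers.
tax_records = pd.DataFrame({
    "parcel_id": ["A1", "A1", "B2"],
    "year": [2022, 2023, 2023],
    "assessed_value": [4_100_000, 4_350_000, 7_900_000],
})
rent_rolls = pd.DataFrame({
    "property": ["A1", "B2"],
    "period": ["2023", "2023"],
    "avg_rent": [1450.0, 2100.0],
    "occupancy": [0.94, 0.97],
})

# Normalisation step: map each feed onto a shared (asset_id, year) key.
tax = tax_records.rename(columns={"parcel_id": "asset_id"})
rents = rent_rolls.rename(columns={"property": "asset_id"})
rents["year"] = rents["period"].astype(int)

# Align the feeds into one time series table per asset.
history = tax.merge(rents[["asset_id", "year", "avg_rent", "occupancy"]],
                    on=["asset_id", "year"], how="left")
print(history.sort_values(["asset_id", "year"]))
```

Most of the effort at this layer is entity resolution and schema mapping, which is why ingesting 100 plus sources cleanly is itself a significant engineering asset.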
3.2 Predictive models
On top of this data layer the platform runs multiple machine learning models that estimate:
- Current market value for each asset.
- Future value and cash flow potential.
- Risk metrics such as probability of distress or unusually high vacancy.
- Relative attractiveness compared with similar assets in other submarkets.
Commercial coverage describes Skyline AI as “sequencing the DNA of real estate”, a shorthand for highlighting assets whose asking price sits below the AI estimated market value.
The output is not a single number but a structured insight package: value ranges, confidence levels, drivers of upside, and identified anomalies.
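As an illustration of how a value range and confidence level can fall out of a model ensemble, the sketch below trains a random forest on synthetic data and reads the spread of per tree predictions as a crude uncertainty band. This is a stand-in under stated assumptions, not Skyline AI’s actual method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in features: rent, occupancy, local income growth.
# The real platform reportedly uses thousands of variables per asset.
X = rng.normal(size=(500, 3))
y = 5.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# For a new asset, the spread of per-tree predictions gives a rough
# value range rather than a single point estimate.
asset = rng.normal(size=(1, 3))
per_tree = np.array([tree.predict(asset)[0] for tree in model.estimators_])
print(f"estimate: {per_tree.mean():.2f}  "
      f"range: [{np.percentile(per_tree, 5):.2f}, "
      f"{np.percentile(per_tree, 95):.2f}]")
```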
3.3 Workflow integration
The system is designed to plug directly into investment and lending workflows:
- Deal sourcing: The platform scans entire markets and flags assets where expected performance materially exceeds what traditional pricing suggests, so investment teams receive ranked opportunity lists instead of raw property tables (see the sketch after this list).
- Underwriting: For a selected asset, AI generated value predictions, rent projections, and operating expense expectations feed into underwriting models. Industry analyses of similar tools report time savings of up to 98 percent in underwriting, reducing analysis time from weeks to minutes while maintaining or improving accuracy.
- Portfolio management: Investors can run “what if” scenarios across existing portfolios, exploring sale timing, refinancing options, and repositioning strategies based on model forecasts.
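The deal sourcing step can be illustrated in a few lines of pandas: compare model value against asking price and return a ranked shortlist. Column names and thresholds here are invented for the example.

```python
import pandas as pd

# Hypothetical market scan: model_value comes from the valuation models,
# asking_price from listings. Neither column name is from Skyline AI.
scan = pd.DataFrame({
    "asset_id": ["A1", "B2", "C3", "D4"],
    "asking_price": [4.0, 8.1, 5.5, 3.2],   # millions USD
    "model_value": [4.6, 8.0, 6.4, 3.3],
})

# Flag assets whose estimated value materially exceeds the asking price,
# then hand analysts a ranked shortlist instead of a raw property table.
scan["upside_pct"] = (scan["model_value"] - scan["asking_price"]) / scan["asking_price"]
shortlist = scan[scan["upside_pct"] > 0.05].sort_values("upside_pct", ascending=False)
print(shortlist)
```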
3.4 Partnerships and institutional adoption
Skyline AI did not operate in isolation. It formed strategic partnerships with:
- DWS Group, a global asset manager, which invested in the company and integrated its models into DWS real estate investment processes.
- Greystone, a major real estate finance firm, to enhance loan underwriting using Skyline AI valuations and risk analysis.
- JLL, one of the largest global real estate services firms, which first invested via JLL Spark and later fully acquired Skyline AI to embed the technology across its capital markets and asset management activities.
One widely cited milestone was a 26 million US dollar multifamily acquisition in the United States where Skyline AI’s algorithms identified and underwrote the asset based primarily on its models. The transaction was presented as an “algorithm based real estate deal” that validated AI driven underwriting in practice.
4. Impact
Because many institutional deals are private, detailed performance figures are not publicly disclosed. However, a combination of Skyline AI disclosures and independent industry studies allows a clear picture of impact.
4.1 Speed and deal volume
AI underwriting platforms used in CRE have demonstrated:
- Up to 98 percent reduction in underwriting time, with analysis cycles collapsing from weeks to minutes.
- The ability for investment teams to evaluate significantly more opportunities per quarter, which directly increases the chance of finding above market returns.
When such tools are integrated in firms like JLL or DWS, this speed becomes a strategic advantage in competitive bidding environments.
4.2 Accuracy and risk management
Skyline AI and similar platforms bring three performance advantages:
- More comprehensive data: By incorporating thousands of variables per asset, AI models capture signals that traditional models would ignore. Examples include subtle demographic changes, infrastructure announcements, or small shifts in rent growth at the block level.
- Consistent valuations: Machine learning models apply the same logic to every asset. This reduces the variance that arises when different analysts interpret the same information in different ways.
- Earlier detection of mispricing and distress: Case material reports that Skyline AI can surface assets priced below AI predicted value, as well as properties that show early signs of financial stress, long before conventional metrics flag problems.
Industry commentary notes that this type of insight helps investors deploy capital that would otherwise remain idle, while also avoiding overpaying in overheated submarkets.
4.3 Organisational transformation
The acquisition of Skyline AI by JLL illustrates how AI has begun to transform real estate at the organisational level:
- JLL integrated Skyline AI into its global advisory, capital markets, and valuation services, so that AI derived insights now inform decisions for many billions of dollars in assets under management.
- JLL research on AI in commercial real estate reports that nearly 90 percent of C suite leaders expect AI to solve major challenges across pricing, portfolio optimisation, and building operations, which confirms how central these tools have become to strategy.
In practical terms this means:
- Fewer manual data gathering tasks for analysts.
- More time spent on judgement, negotiation, and structuring.
- Tighter integration between investment, lending, and asset management teams who now work from a shared AI enriched view of the market.
Case Study: Coca Cola “Create Real Magic”
How Generative AI Turned a Global Brand Into a Creative Platform
1. Context
Coca Cola is one of the most recognizable brands on earth, but by the early 2020s it faced a familiar problem for legacy brands:
- Younger audiences were spending more time creating content than passively watching ads.
- Social feeds were saturated with video and graphics from thousands of brands.
- Traditional TV centric campaigns were losing attention and cultural relevance.
To refresh its “Real Magic” brand platform, Coca Cola partnered with OpenAI, Microsoft and Bain & Company to build an AI powered creative environment called Create Real Magic. The platform combined GPT-4 for text capabilities with DALL·E for image generation, wrapped in a custom web experience that allowed fans and digital artists to generate original artwork using official Coca Cola visual assets.
This was not a small experiment. Coca Cola positioned Create Real Magic as a flagship global initiative, with the promise that the best AI generated artworks would appear on high profile digital billboards such as Times Square in New York and Piccadilly Circus in London.
2. The Problem
Coca Cola needed to solve several marketing challenges at once:
- Relevance with younger, creator oriented audiences: Gen Z and younger millennials are used to participating in culture through memes, edits, remixes, and fan art. Traditional top down storytelling was no longer enough to keep the brand central in their feeds.
- Global consistency with local resonance: Coca Cola markets across more than 200 countries, with very different cultures, languages, and aesthetics. It needed a way to keep its brand platform coherent while still feeling personal and relevant in different regions.
- Creative scale without creative fatigue: Classic campaigns rely on a limited set of hero assets. In a social media environment that demands constant novelty, static creative quickly feels repetitive. Coca Cola needed a way to generate huge volumes of distinctive content without losing brand control.
- Proof that AI can enhance creativity rather than replace it: As AI entered public debate, brands faced a trust problem. Coca Cola wanted to show that AI can augment human creators instead of sidelining them, and that responsible AI use can deepen brand storytelling.
3. The AI Solution
3.1 The Create Real Magic Platform
Coca Cola launched the Create Real Magic web platform in March 2023. The idea was simple:
- Any eligible user could log in and access a curated library of official Coca Cola brand elements such as the contour bottle, Spencerian script logo, vintage Santa illustrations, and the polar bear characters.
- Users typed text prompts describing the scene or idea they wanted to create.
- Behind the scenes, GPT-4 helped refine prompts and DALL·E generated images that blended user ideas with the official brand assets, all running on Microsoft Azure infrastructure.
The platform was initially available in the United States, parts of Europe, Australia and several other regions. Thousands of digital artists and fans submitted artwork. Selected pieces were displayed on large format digital billboards in Times Square and Piccadilly Circus, and across Coca Cola owned channels.
Key aspects of the solution:
- Human in the loop creative direction: Coca Cola creative teams, agencies and AI specialists set strict brand guardrails. The AI models were configured so that every output respected brand codes such as colour, typography and iconography. Human curators chose which works would appear in high impact placements.
- Generative co creation at scale: Instead of producing a fixed set of campaign visuals, Coca Cola turned the entire internet audience into a distributed creative department. AI provided the technical capability to render new brand compliant images on demand, while the crowd provided the imagination.
- Integration with subsequent campaigns: Learnings and infrastructure from Create Real Magic were used in later generative AI initiatives, including an AI powered holiday card and “snow globe” experience that let users create personalised Christmas scenes with Coca Cola branding.
3.2 Technical and Marketing Logic
From a technical perspective:
- The platform used generative AI models to transform text prompts into images.
- Azure AI handled large scale compute and content delivery, so millions of users could experiment without performance issues.
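A minimal sketch of that two step flow, assuming the OpenAI Python SDK, might look like the following. The model names, guardrail text, and wiring are illustrative; Coca Cola’s actual integration has not been published.

```python
from openai import OpenAI  # assumes the openai Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

user_idea = "the contour bottle glowing over a city skyline at dusk"

# Step 1: use a chat model to expand the fan's short idea into a richer,
# brand-safe image prompt (the guardrail instruction here is invented).
refined = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Rewrite the user's idea as a vivid "
         "image prompt that keeps official brand elements intact."},
        {"role": "user", "content": user_idea},
    ],
).choices[0].message.content

# Step 2: render the refined prompt as an image.
image = client.images.generate(model="dall-e-3", prompt=refined,
                               size="1024x1024", n=1)
print(image.data[0].url)
```

The design point is the separation of concerns: the language model handles prompt refinement and guardrails, while the image model only ever sees sanitised, brand compliant prompts.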
From a marketing perspective:
- The campaign reframed advertising as participatory storytelling. People did not just watch a Coca Cola ad; they helped make it.
- Every generated artwork created an opportunity for organic social sharing, since creators were encouraged to post their images on social platforms for visibility and recognition.
4. Impact
4.1 Quantitative Outcomes
Microsoft and independent academic work on AI in digital marketing report several concrete results for Create Real Magic and its follow on holiday experiences:
- More than 1 million users interacted with the AI powered Santa and holiday experience across 43 markets in just three weeks.
- The campaign generated thousands of pieces of user generated artwork, enough to support a sustained presence on social platforms and digital out of home screens.
- Analysis of Coca Cola’s AI brand storytelling notes that the AI powered holiday campaign outperformed prior traditional campaigns on engagement metrics, with higher interaction rates and longer average time spent in the experience, although exact percentages are not publicly disclosed.
- A marketing research report on AI in digital marketing channels cites Create Real Magic as a reference case where social media interactions and co creation drove “interactive and long lasting brand experiences” that extended reach beyond paid media.
While Coca Cola has not published detailed revenue numbers tied solely to this campaign, external analysts highlight that the initiative delivered measurable return on engagement: large scale participation at relatively low incremental creative cost, plus a significant uplift in organic impressions.
4.2 Qualitative and Strategic Impact
Beyond the numbers, Create Real Magic reshaped Coca Cola’s marketing posture in several ways:
- Reframed the brand as a creative collaborator: Instead of treating fans as passive consumers, Coca Cola invited them into the creative process. This strengthened emotional connection and positioned the brand as culturally fluent among digital creators.
- Demonstrated a practical pattern for AI powered brand storytelling: The campaign became a widely cited example in marketing and academic literature of how generative AI can support interactive, user generated brand campaigns rather than just automating copywriting or ad placement.
- Balanced innovation with brand safety: By constraining generation to official assets and using human curation, Coca Cola avoided most of the reputational risks associated with uncontrolled AI imagery, while still benefiting from novelty and scale. Researchers point to Create Real Magic as a case where responsible AI use contributed to competitive advantage.
- Created a reusable AI marketing infrastructure: The same Azure based generative stack now underpins later Coca Cola AI campaigns, including AI enhanced holiday storytelling and AI enabled music experiments in Coke Studio.
Case Study: Khan Academy and Khanmigo in Newark Public Schools
1. Context
Newark Public Schools in New Jersey serves a largely low income, racially diverse student population and has been working for years to close persistent gaps in mathematics achievement, especially after pandemic related learning loss.
The district began partnering with Khan Academy in 2021 on a district wide math program; the rollout now combines the existing Khan Academy practice platform with Khanmigo, an AI powered tutor and teaching assistant added after its 2023 launch. The program initially focused on grades 3 to 8 and was implemented at scale across the district.
Khanmigo is built on top of large language models and is designed specifically for education use. It acts as a guided tutor for students and as a planning and insight assistant for teachers rather than a general purpose chatbot.
2. The Problem
Newark faced several structural challenges that are typical in many school systems:
- Low math proficiency and slow recovery after Covid: Standardized test scores showed that many students were performing below grade level, and progress in math was lagging behind state averages after pandemic disruptions.
- Teacher time constraints: Teachers had limited time to design differentiated practice, analyze data, and give individual feedback to each student. Many classes contained a wide range of abilities, which made it difficult to support both struggling and advanced learners within the same lesson.
- Equity of support: Students who needed the most support were not consistently able to access one to one tutoring. Extra help depended on staff availability, after school programs, or family resources.
The district needed a solution that could provide high quality individualized practice and feedback at scale, while keeping teachers in control of instruction.
3. The AI Solution
Newark implemented a blended model that combines human teaching with AI assistance:
a) Student side: AI powered tutoring and practice
Students use Khan Academy for structured math practice aligned with state standards, while Khanmigo acts as an interactive tutor inside the platform.
- Khanmigo helps students work through problems step by step, asking guiding questions instead of simply providing answers.
- The system encourages explanation and reasoning, prompting students to explain their thinking and offering hints when they get stuck.
- Practice is adaptive. As students master skills, the platform recommends the next appropriate concepts, and if they struggle, it surfaces prerequisite material.
This gives each student a form of persistent one to one support that would be impossible to provide using only human tutors at district scale.
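The adaptive behaviour described above can be illustrated with a toy prerequisite graph: if the student is weak on a prerequisite, recommend that first. Khanmigo’s internal skill model is not public, so the structure below is purely hypothetical.

```python
# Hypothetical prerequisite graph and mastery scores; only the general
# idea of "surfacing prerequisite material" is being illustrated.
PREREQS = {
    "multiplying_fractions": ["equivalent_fractions"],
    "equivalent_fractions": ["basic_fractions"],
    "basic_fractions": [],
}
MASTERY_THRESHOLD = 0.8

def next_skill(target: str, mastery: dict[str, float]) -> str:
    """Recommend the target skill, or recurse into the first weak
    prerequisite if the student is not yet ready for the target."""
    for prereq in PREREQS.get(target, []):
        if mastery.get(prereq, 0.0) < MASTERY_THRESHOLD:
            return next_skill(prereq, mastery)
    return target

student = {"basic_fractions": 0.9, "equivalent_fractions": 0.4}
print(next_skill("multiplying_fractions", student))  # -> equivalent_fractions
```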
b) Teacher side: AI assisted planning and insight
Teachers receive detailed dashboards that aggregate student data from Khan Academy and Khanmigo:
- Reports show which skills individual students and entire classes are struggling with, how much time they spend on task, and how mastery levels evolve over time.
- Khanmigo can help teachers draft practice questions, exit tickets, and differentiated assignments based on current gaps in understanding.
This shifts teacher time away from manual data analysis and routine content creation toward higher value work such as targeted small group instruction and deep feedback.
c) Implementation at scale
The partnership has run for several years and includes:
- Around 8,000 students in grades 3 to 8 tracked in a three year longitudinal study.
- More than 40 districts and tens of thousands of students and teachers participating in Khanmigo pilots more broadly, which has informed product refinement and safety controls.
Bill Gates visited one of the Newark schools using Khanmigo and described classrooms where AI is embedded in everyday practice while teachers remain central: they use the tool to see where students are struggling and to prompt deeper thinking, rather than letting it replace instruction.
4. Impact
Newark and Khan Academy commissioned a three year independent efficacy study focused on math performance:
- Students who became Yearly Proficient Learners on Khan Academy, defined as mastering at least 60 additional skills per year, gained an average of 6 points on the New Jersey Student Learning Assessment (NJSLA) math exam, compared with an average statewide gain of 2 points over the same period. In other words, they improved at roughly three times the state average rate.
- The analysis suggests that for every 10 additional math skills mastered on Khan Academy, students gained approximately one additional point on the state exam.
These gains are not limited to high performing students. The study indicates improvements across a broad range of learners when they engage consistently with the platform.
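A quick back of the envelope check of the reported relationship, taking the one point per ten skills figure as an assumption drawn from the study’s headline numbers:

```python
# Rough version of the reported ratio: ~1 extra NJSLA point per 10
# additional skills mastered in a year (assumption from the study).
POINTS_PER_SKILL = 1 / 10

for skills in (20, 60, 100):
    print(f"{skills} skills mastered -> ~{skills * POINTS_PER_SKILL:.0f} extra points")
# 60 skills (the Yearly Proficient threshold) -> ~6 points, which matches
# the average gain reported for that cohort.
```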
Beyond test scores, qualitative reports from teachers and external observers highlight several additional effects:
- Teachers report better visibility into individual learning gaps and more time for higher order teaching activities, because routine differentiation and practice are handled by the AI supported system.
- Students show higher engagement and persistence, partly because AI tutoring gives immediate feedback and allows them to work at an appropriate difficulty level.
From a systems perspective, the Newark case illustrates how AI can:
- Operate as an infrastructure layer that supports both teaching and learning, rather than a standalone gadget.
- Provide measurable learning gains at district scale, validated through multi year outcome data.
- Support equity goals by giving all students access to a responsive tutor like experience, irrespective of family income or access to private support.
For education bodies, this case shows that when AI is introduced with clear pedagogy, strong data safeguards, and teacher centric design, it can move from experiment to durable part of the learning ecosystem, with results that are visible in both standardized outcomes and classroom practice.
Case Study: Lemonade and the Rise of AI Native Insurance
1. Context
For most of the twentieth century, property and casualty insurance followed the same pattern: paper forms, phone calls, long underwriting cycles, and back-office staff doing manual work at every step. This structure made it expensive to sell small policies, such as renters insurance for young people living in cities.
Lemonade was founded in 2015 as a fully digital insurer built around artificial intelligence and behavioural economics from day one. It started with renters and homeowners insurance in the United States and later expanded into pet, life, and auto lines.
From the start, Lemonade designed its operating model around AI chatbots rather than human agents. Customers buy cover through a conversational interface and manage their policy directly in the app. That shift turned insurance from a broker-centric process into a self-service digital product.
2. The Problem
Lemonade set out to solve several structural problems in traditional insurance:
- High distribution and servicing costs: Selling and servicing policies through agents and call centres required significant human labour. This made low-premium products such as renters insurance unattractive for incumbents, even though demand existed among younger, digital-native customers.
- Slow, fragmented customer journeys: Getting insured often required multiple forms, phone calls, and delays while underwriters assessed risk and produced quotes. This experience felt completely out of step with digital services in banking, transport, or retail.
- Static, coarse risk models: Many insurers still relied on relatively simple rating factors and legacy systems. That limited their ability to segment risk finely, price dynamically, and adapt quickly as they collected more data.
- Trust and alignment issues: Traditional insurers keep whatever premium is left after paying claims and costs. Lemonade’s leadership argued that this structure can create mistrust at the margin, since customers may feel their insurer gains when claims are denied.
Lemonade needed a way to sell and service a large volume of small policies at very low marginal cost, while also improving risk selection and rebuilding trust.
3. The AI Solution
Lemonade built an AI native architecture around two flagship bots and a set of machine learning models that run across underwriting, pricing, and customer experience.
3.1 AI onboarding and underwriting: “Maya”
Customers interact first with Maya, a conversational chatbot on the website and mobile app. Maya asks a structured sequence of questions, collects relevant risk information, and generates a personalised quote in real time.
Under the surface, Maya relies on machine learning models that:
- Segment customers into risk groups using a wide range of behavioural and external data.
- Estimate expected loss for each segment and map that to an appropriate premium.
- Continuously retrain as more policies and claims flow through the system.
Public analyses and early coverage note that Lemonade can typically issue a policy in about 90 seconds for renters and homeowners, because the AI handles questions, risk scoring, and document generation in a single flow.
This AI driven onboarding allows Lemonade to profitably serve low-ticket policies that would be uneconomical in a human-only model.
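A stylised version of this expected loss pricing logic is sketched below. Every number, and the expense load parameter itself, is an illustrative assumption; Lemonade’s real models are far richer and proprietary.

```python
# Minimal sketch of expected-loss pricing; all figures are invented.
def quote_premium(p_claim: float, avg_claim_cost: float,
                  expense_load: float = 0.25) -> float:
    """Annual premium = expected loss grossed up for a flat expense load."""
    expected_loss = p_claim * avg_claim_cost
    return expected_loss / (1 - expense_load)

# A renters-style micro-policy: low claim probability, modest claim size.
print(f"annual premium: ${quote_premium(p_claim=0.03, avg_claim_cost=2500):.2f}")
# -> $100.00, the kind of low-ticket policy that only works when the
#    marginal cost of selling and servicing it is close to zero.
```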
3.2 Behavioural economics built into the AI flows
Lemonade also embeds ideas from behavioural economics into its digital flows. For example:
- It takes a flat fee out of each premium and donates leftover funds, after claims are paid, to charities chosen by customers through its “Giveback” program.
- The app and chatbot design use subtle prompts that encourage honesty and make customers feel aligned with the insurer’s incentives.
These choices are implemented in the AI powered interfaces. The models not only assess risk but are wrapped in a user experience that aims to reduce adversarial behaviour and improve data quality at input, which in turn improves model performance.
3.3 AI enhanced pricing and telematics
As Lemonade expanded into auto, it integrated telematics data directly into its AI pipeline. By linking driving behaviour (speed, braking patterns, time of day, and so on) with claims experience, its models can price auto policies more accurately and reward safer driving. A 2025 case study notes that integrating telematics at onboarding lifted auto conversion by about 60 percent and materially improved pricing accuracy.
This approach reduces reliance on static variables such as age and postcode, and moves pricing toward personalised, behaviour-based risk scoring.
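The following toy function shows what behaviour based pricing can look like in principle: telematics features scale a base premium up or down within bounds. The features and weights are invented for illustration and are not Lemonade’s.

```python
# Toy behaviour-based risk multiplier from telematics features.
def telematics_multiplier(hard_brakes_per_100km: float,
                          night_share: float,
                          avg_speed_over_limit: float) -> float:
    """Scale a base premium up or down based on observed driving."""
    score = (0.04 * hard_brakes_per_100km
             + 0.30 * night_share
             + 0.02 * avg_speed_over_limit)
    return max(0.7, min(1.5, 1.0 + score - 0.15))  # clamp the adjustment

base_premium = 900.0
safe = telematics_multiplier(0.5, 0.05, 0.0)    # gentle, daytime driver
risky = telematics_multiplier(6.0, 0.40, 8.0)   # hard braking, often at night
print(f"safe driver pays ~${base_premium * safe:.0f}, "
      f"risky driver ~${base_premium * risky:.0f}")
```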
3.4 AI throughout operations
Although this case has focused on onboarding and pricing rather than claims or fraud, it is worth noting that the same AI stack runs through the rest of Lemonade’s operations:
- AI bots handle a large share of customer interactions without human involvement. One analysis reports that around 30 percent of interactions are fully automated.
- Lemonade’s SEC filings state that the claims bot “AI Jim” handled the first notice of loss for 96 percent of claims as of early 2020 and resolved about one third of them end to end.
This end-to-end AI orchestration is what allows the company to scale with far fewer traditional back-office staff than a comparable insurer.
4. Impact
The combined effect of AI powered onboarding, pricing, and operations has been significant, both for Lemonade and for the wider industry.
4.1 Customer experience and adoption
- Lemonade reports that customers can get a policy in roughly 90 seconds and, in extreme cases, have had claims resolved in a few seconds, something that would have been unthinkable in the traditional model.
- The company’s mobile app has an average rating of around 4.9 in major app stores, which multiple sources attribute to the simplicity of the AI guided experience.
- An independent case study notes very high loyalty metrics, including renewal intent above 90 percent and similarly high referral intent among customers.
These indicators suggest that an AI first model can improve both speed and perceived fairness when it is designed with user experience in mind.
4.2 Growth and economics
- By March 2025 Lemonade had surpassed 1 billion US dollars in in-force premium, only about eight and a half years after writing its first policy. This corresponds to a compound annual growth rate of roughly 150 percent over that period.
- A 2025 analysis of its second quarter results highlights 29 percent year-on-year growth in in-force premium to 1.08 billion US dollars, with 2.69 million customers. It also notes that improvements in AI driven underwriting and operations helped reduce the gross loss ratio by 12 percentage points to 67 percent.
- Another business review emphasises that Lemonade’s AI enabled digital operations have helped reduce overhead costs per policy and support rapid revenue growth, even while the company continues to invest heavily in technology and expansion.
In simple terms, the AI stack did not just make the experience smoother. It altered the cost structure and allowed the company to grow quickly in lines that were previously unattractive to incumbents.
4.3 Influence on the wider insurance sector
Lemonade’s approach has pushed traditional insurers and other insurtechs to accelerate their own AI programs:
- Industry overviews now routinely cite Lemonade as a leading example of AI driven insurance, particularly for its combination of chatbots, machine learning underwriting, and behavioural design.
- Large carriers such as Allianz and others publicly describe AI as central to their strategies, not only for back-office efficiency but also for more proactive, preventive services, partly in response to competitive pressure from digital entrants.
The case shows how an AI native insurer can force an entire sector to rethink how products are priced, sold, and serviced.
5. Why this case matters
This case study illustrates that AI in insurance is not limited to claims automation or fraud detection. In Lemonade’s model, AI:
- Enables profitable micro-policies through extremely low marginal servicing cost.
- Supports granular, behavioural pricing that becomes more accurate as data accumulates.
- Allows a small technology focused team to compete with very large incumbents.
- Helps rebuild elements of trust through transparent digital flows and aligned incentives.
For regulators, educators, and executives, the lesson is that AI can reshape the economics, customer experience, and risk management core of insurance, not just the back office.
Reflection: What These Stories Really Show
You have just walked through concrete examples of AI in healthcare, finance, real estate, marketing, insurance and education. None of them were science fiction. All of them involved real institutions, real constraints, and real results.
Stepping back, several themes repeat across every case.
1. AI changes the shape of work, not the purpose of it
- In healthcare, AI did not replace doctors. It gave clinicians earlier signals, clearer images, and better risk scores so that human judgment could arrive sooner and with more confidence.
- In finance and insurance, AI did not replace the need for trust. It processed oceans of data so that risk decisions could be faster, more consistent, and more transparent.
- In education, AI did not replace teachers. It created a scaffolding of practice, feedback, and differentiation so that human teaching could focus on explanation, encouragement, and challenge.
In every case, AI moved human experts away from repetitive, low value work and toward roles where meaning, ethics and creativity sit at the centre.
2. Results come from systems, not gadgets
None of the case studies succeeded because someone added a single clever tool.
They worked because leaders:
- Clarified the problem.
- Integrated AI into existing processes.
- Created feedback loops between humans and machines.
- Measured impact and kept improving.
Watson in medicine, Khanmigo in Newark, Lemonade in insurance, AI driven pricing in real estate, generative creativity at Coca Cola: all of these sit inside broader systems of policy, data, workflow and accountability.
The lesson is simple. AI is powerful, but it does its best work when it is part of a designed system, not a loose add on.
3. Human responsibility becomes more important, not less
As AI becomes more capable:
- Ethical questions become sharper, not softer.
- Data quality matters more, not less.
- Governance, transparency, and explainability move from the margins into the core of strategy.
Someone still has to decide which data is used, which outcomes are acceptable, which trade offs are justified, and where the line is between help and intrusion. That someone is not the model. It is the human team behind it.
The professionals who thrive will be the ones who can ask better questions of AI systems, interpret their outputs critically, and connect those outputs to human values, legal frameworks, and institutional missions.
What this means for you
You have now seen AI:
- Spot diseases earlier than humans alone.
- Support large districts in closing learning gaps.
- Let small insurers compete with giants.
- Turn passive audiences into creative collaborators.
- Predict risk more accurately than static models.
It is reasonable to feel two things at once:
- Excitement about what is possible.
- Anxiety about where you fit into all this.
Both reactions are honest. The purpose of this module is not to convince you that everything will automatically be fine. It is to show you that there is a clear path forward for people who choose to understand and work with these systems.
If you:
- Understand the basics of how AI learns.
- Can see where Automation, AI, Machine Learning, and Deep Learning sit in a workflow.
- Know the strengths and limits of each.
- Care about ethics, governance, and human outcomes.
Then you are not standing on the outside of this transformation. You are exactly the kind of person modern organisations need.
A message to you from the Cyrenza perspective
Cyrenza approaches Artificial Intelligence as a way to amplify human intent. The purpose is to give you more reach, more clarity, and more speed in the work you already care about, not to remove your contribution. When you think about AI in this programme, think of it as an additional capability you can direct, just as you would direct a team or a set of tools.
The mindset we invite you to adopt is simple. AI is a source of leverage that you can learn to guide. It can extend your thinking, support your analysis, and carry more of the repetitive load, while you remain responsible for judgment, priorities, and outcomes. Teams that understand where to integrate AI into their daily work will steadily gain an advantage over teams that ignore it or relate to it only through fear.
You are not expected to become a research scientist or an engineer. You are expected to become literate. That means understanding how AI systems behave in broad terms, recognising where they tend to be strong and where they often fail, and learning how to combine their abilities with your own domain expertise. This is professional literacy for the coming decade in the same way that spreadsheet literacy was essential in the last one.
With that level of literacy, you gain three important advantages in any role. You make better decisions, because you can combine your judgment with large scale analysis instead of relying only on limited personal experience. You move faster, because routine drafting, checking, and summarising can be handed to systems while you focus on strategy, design, negotiation, and leadership. You also become more valuable to your organisation, because you know how to coordinate human skills and AI capabilities into one coherent way of working.
The job market is already shifting, and it will continue to do so. Some tasks will fade away. New roles will emerge around orchestration, oversight, design of workflows, and responsible use of intelligent systems. In that environment, you can either hold tightly to older patterns of work and hope change arrives slowly, or you can learn how these new tools operate and position yourself as the person who can explain them, direct them, and improve them for others.
This training is designed to prepare you for the second path. If you remember only one idea from this module, let it be this: your relationship with AI will be a skill, and that skill will influence your professional outcomes. Cyrenza exists to make that relationship practical by giving you concepts, examples, and tools you can use. The more deeply you understand the principles covered so far, the more effectively you will be able to convert collaboration with AI into real performance gains for your team, your organisation, and your own career.
In the next stages of the curriculum we will move from understanding into application. You will work with concrete scenarios, learn how to design effective prompts, and see how to deploy AI agents inside real workflows in a measured, safe way. You do not need to approach this with certainty or fearlessness. Curiosity, care, and a willingness to keep learning are enough to begin.