The Motorola Solutions Podcast
What is mission-critical AI, and how is it shaping our future? Join Motorola Solutions executive vice president and chief technology officer Dr. Mahesh Saptharishi as he and AI experts explore the science, the challenges and the incredible potential of AI when it matters most.

April 7, 2026
Ep. 6 Part 4: Trust, Transparency and the Human–AI Partnership with Dr. Harriet Rowthorn
In the final installment of our series, cognitive psychologist Dr. Harriet Rowthorn joins Mahesh to tackle one of the most complex hurdles in artificial intelligence: building and calibrating human trust.
Dr. Harriet Rowthorn, Behavioural Scientist
In the final installment of our series, cognitive psychologist Dr. Harriet Rowthorn joins Mahesh to tackle one of the most complex hurdles in artificial intelligence: building and calibrating human trust. We explore why common transparency tools, like confidence scores, often fail to improve human judgment and how trust is built through repeated, personalized interaction over time. Dr. Rowthorn delves into the "black box" of AI, the ethical necessity for transparency and why designing systems that reward quality and scrutiny over mere speed is critical — especially in high-stakes environments like public safety.
Show Notes:
- The Problem with Confidence Scores: Mahesh and Dr. Rowthorn discuss how humans often misinterpret AI confidence metrics, sometimes viewing scores as low as 50% as "virtual certainty."
- How Humans Build Trust: The conversation explains that trust isn't a baseline; it is built through repeated evidence and is highly context-dependent based on perceived intent.
- The Necessity of Transparency: Addressing AI as a "black box," Dr. Rowthorn emphasizes that transparency must be understandable for laypeople to satisfy ethical and safety obligations.
- Personalization and Interaction Schemas: The benefits of interacting with a single, personalized AI entity that remembers past interactions to help users build a more accurate "interaction schema."
- Incentives: Quality vs. Quantity: A look at why software should be designed to reward human scrutiny and "disconfirming evidence" rather than just the fastest path to an answer.
- Dr. Rowthorn’s Personal AI Usage: Harriet shares her experience using AI as a "wise sage" and thinking partner to extend her cognitive reach.
March 24, 2026
Ep. 6 Part 2: Designing Human–AI Interaction by Understanding Human Memory with Dr. Harriet Rowthorn
In the second part of our series, cognitive psychologist Dr. Harriet Rowthorn returns to move from the foundations of memory into...
Dr. Harriet Rowthorn, Behavioural Scientist
In the second part of our series, cognitive psychologist Dr. Harriet Rowthorn returns to move from the foundations of memory into a more specific set of questions: what happens when AI enters the loop of human recall? We explore why AI can be so dangerous when memory is already fragile — examining how plausible errors, confidence and "processing fluency" can make inaccurate information feel trustworthy — and why this is such a sticky problem. We also look at the other side of the equation: whether AI might be designed not to replace human recall, but to support it more carefully — less as an answer machine and more as a tool that helps people surface, test and articulate what they actually remember.
Show Notes:
- The "Perfect Storm" for Memory Contamination: Dr. Rowthorn explains that AI presents a unique risk to memory because its errors are often plausible and interwoven with correct information. Humans suffer from "inattentional blindness," making it difficult to spot these embedded inaccuracies.
- Processing Fluency as a Trap: AI outputs are designed to be easy to read, creating high "processing fluency." Our brains unconsciously use this ease of processing as a metacognitive marker that the information is true, safe or trustworthy.
- The Danger of AI Confidence: AI delivers information with high conviction and assertiveness. Without a critical eye, users often mistake this confident delivery for factual accuracy.
- AI as a Cognitive Interviewer: Instead of providing "facts," AI could be trained to facilitate accurate memory recall by asking the right questions, grounded in an established cognitive science protocol known as the "cognitive interview." This would flip the role of AI from an "answer machine" to a tool that helps extract what a human actually perceived.
- Fighting the "Easy Button": Humans are wired to take the path of least resistance. Dr. Rowthorn argues that while pressing a button for an instant summary is easy, it removes the essential act of "writing as thinking," which has significant implications for system design in high-stakes environments.
- Design Interventions for Skepticism: Current AI warnings (like small text banners) are often ineffective. Dr. Rowthorn suggests more salient design features, such as highlighting specific areas where the AI has low confidence, to encourage user scrutiny at the right time.
- The Power of Personalization and Trust: For AI to effectively support human behavior change, it needs to build trust and rapport. By learning an individual's specific strengths and weaknesses over time, AI can provide hyper-personalized feedback that motivates users to take the more difficult, but more rewarding, path of deeper thinking.
March 4, 2026
Ep. 6 Part 1: Human Memory and Recollection of Information with Dr. Harriet Rowthorn
In this first part of a four-part series, cognitive psychologist and behavioral scientist Dr. Harriet Rowthorn joins the show to discuss the...
Dr. Harriet Rowthorn, Behavioural Scientist
In this first part of a four-part series, cognitive psychologist and behavioral scientist Dr. Harriet Rowthorn joins the show to discuss the fundamental nature of human memory. Moving away from the common "video recording" analogy, Harriet explains how memory is actually a reconstructive, forward-facing system shaped by attention, emotion and mental frameworks. The conversation explores how these psychological insights are critical for high-stakes environments like the criminal justice system and the design of AI.
Show Notes:
- The Scientist's View of Memory: Why memory is not a retrospective "YouTube video" of the past, but a prospective tool evolved to help us make better decisions for the future.
- Memory as a Reconstructive System: Understanding how memory is built from "training data" (past experiences) to predict new outcomes, making it naturally fallible and malleable.
- The Role of Schemas: How our brains use mental frameworks or "narratives" to navigate the world efficiently, and why these same shortcuts can lead to significant memory distortions.
- Encoding & Consolidation: The process of moving information from short-term to long-term memory, requiring active attention and semantic processing.
- The Cognitive Interview: A deep dive into a specialized interviewing technique designed to maximize accurate recall through:
  - Free Recall: Allowing the witness to speak without steering or prompting.
  - Reverse Order Recall: Breaking the reliance on schemas by recounting events backward.
  - Changing Perspectives: Viewing the event through a different person's eyes.
  - Mental Reinstatement: Re-engaging all five senses to unlock memory cues.
- Social Dimensions of Memory: How discussing events with others can either strengthen recall or lead to unintentional "memory conformity" and distortion.
January 13, 2026
Ep. 5 Part 2: Policy, Decision Making and AI with Professor Krzysztof Gajos
This episode continues our conversation with Professor Krzysztof Gajos, diving deep into the foundational challenges of integrating AI into...
Krzysztof Gajos
Lead of the Intelligent Interactive Systems Group at Harvard
This episode continues our conversation with Professor Krzysztof Gajos, diving deep into the foundational challenges of integrating AI into high-stakes, mission-critical environments. We move beyond simple performance metrics to examine socio-technical systems—the complex interplay of technology, people, policies and liability that truly shapes outcomes in public safety and other critical fields.
Professor Gajos introduces the “Dynamite Problem” analogy to caution against treating powerful AI as a universal solution, arguing that responsible design must start with a deep understanding of the problem and the cognitive work of the people at the heart of a given socio-technical system.
The discussion centers on the profound question: what is a decision? We explore the tension between human autonomy and AI advice, highlighting why a collaborative workflow where AI truly supports rather than replaces human judgment is vital for ensuring accountability and for protecting the professional confidence and identity of first responders and other key personnel.
Show Notes
- Socio-technical System: A system that comprises technology, people (stakeholders), policies, norms, procedures and incentives. Intervening with new technology requires understanding this entire complex system, and may involve changing not just the technology but also how people work and the surrounding laws and policies.
- What is a Decision? Krzysztof Gajos explores this question through three frames:
  - Cognitive Process: The steps a person goes through to arrive at a decision (gathering information, processing it, evaluating options, confirming confidence).
  - The Process the Decision Is Part Of (Subject's Welfare): How the decision impacts the larger picture for the person affected by it (e.g., a patient getting better, an applicant's ability to appeal a wrong decision).
  - Welfare of the Decision Maker (Professional Identity): How the decision-making process fits with the worker's professional identity, pride and what they want to achieve for their clients or workplace.
- Accountability/Liability (in AI Context): The tension created when AI provides a decision recommendation, particularly in high-stakes fields like medicine. Doctors fear a risk of greater liability if they disagree with the AI's recommendation and the outcome is negative, which erodes their authority even if the system is advisory. This necessitates clarifying shared accountability between the human and the AI/vendor, or changing laws to protect the doctor's decision-making.
- AI as Amplification vs. Replacement/Automation: Gajos advocates for amplification, where AI helps people with the particularly difficult or overwhelming aspects of their job (such as synthesizing large volumes of information, surfacing relationships or systematically evaluating options) while leaving them in charge of the truly consequential parts of the work. This preserves autonomy and leads to better outcomes, as opposed to pure replacement or automation.
- Dynamite Problem: An analogy for the risk of over-relying on a very powerful and versatile technology (like modern AI systems) and trying to "shoehorn them into every problem that we see," without first deeply understanding the specific problem and considering other, possibly better-suited, solutions.
- Intervention Generated Inequalities: A concept from public health. It describes an intervention that, while making people better off on average, benefits those who are already better off more than those who are behind, thus widening the social or economic gap between them.
December 16, 2025
Ep. 5 Part 1: The Fragile Science of Human-AI Teams with Professor Krzysztof Gajos
In the first of a two-part conversation, Mahesh welcomes Professor Krzysztof Gajos, lead of the Intelligent Interactive Systems Group at Harvard...
Krzysztof Gajos
Lead of the Intelligent Interactive Systems Group at Harvard
In the first of a two-part conversation, Mahesh welcomes Professor Krzysztof Gajos, lead of the Intelligent Interactive Systems Group at Harvard, to challenge the common assumption that human + AI is always better than either alone.
Professor Gajos takes us deep into the fascinating, messy problem space of human-AI collaboration, revealing these configurations to be inherently fragile and contingent. The discussion dissects how specific design failures—including over-reliance on incorrect advice, increased cognitive load, poorly conceived delegation models and poor interface design—can undermine decision quality, de-skill users and create perverse incentive structures that ultimately defeat the very goals of the systems themselves.
Across this wide-ranging conversation, Professor Gajos emphasizes the need for worker-centric AI systems that prioritize human competence, learning and autonomy over what are all too often superficial efficiency gains. Discover why thoughtful AI design must start with a deep understanding of the cognitive work people actually perform.
Show Notes
- Software Bloat: A phenomenon (observed around the early 2000s in software like Microsoft Office) where consumer software becomes so complex and feature-rich that people have trouble navigating it and use only a small subset of its overall capability.
- Need for Cognition: A psychological concept that refers to an individual's tendency to enjoy, seek out or feel motivated by effortful cognitive tasks. The cited study found that people with a high need for cognition were more likely to use AI-generated shortcuts than those with a lower need for cognition.
- Intervention Generated Inequalities: An unintended consequence where an intelligent user interface, which appears to make people on average more efficient, may increase the gap between users by providing a greater benefit to those who are already more successful (e.g., people with a high need for cognition).
- Cognitive Forcing: An intervention technique, often used in the medical decision-making literature, that interrupts a person's decision-making process to nudge them toward more analytical, less heuristic thinking. In AI, this was explored by confronting a person's decision with the AI's opposing view and its reasons.
- Worker-centric AIs: A proposed goal for AI design that focuses on supporting things important to the person doing the work, such as their sense of competence (supporting learning on the job) and autonomy, as opposed to focusing solely on decision accuracy or efficiency.
- Delegation Model (vs. Partnership Model): The primary goal of much AI assistance today is efficiency, which constitutes a delegation model. This model "comes with a lot of unintended consequences," including the risk of de-skilling, unlike a potential partnership model that incorporates more domain understanding and better cognitive engagement.
October 15, 2025
Ep. 4: Empathy, AI and the Future of Design with Professor James Landay
"Will AGI (Artificial General Intelligence) be achieved faster by humans working with machines? I actually think that is true..."
James Landay
Professor of Computer Science at Stanford University, Co-Director of the Stanford Institute for Human-Centered AI (HAI)
"Will AGI (Artificial General Intelligence) be achieved faster by humans working with machines? I actually think that is true. And it's almost a mistake to think of machines as operating only by themselves."
In this episode, Mahesh speaks with Professor James Landay, a leading expert in human-computer interaction and co-founder of the Stanford Institute for Human-Centered AI. The conversation explores the complexities of designing AI systems for high-stakes environments like public safety and enterprise security, a core focus for Motorola Solutions.
Professor Landay introduces the principles of human-centered AI, emphasizing its power to augment human capabilities rather than replace them. Discover how “human-AI collaboration can lead to superintelligence faster than AI acting alone.” The conversation also delves into the crucial shift from user-centered design to “community-centered and society-centered design,” acknowledging AI's broader impact beyond the immediate user.
Finally, Professor Landay shares invaluable advice for today's developers, underscoring the responsibility of “managing AI's benefits and harms through better design ethics and intentional collaboration,” particularly relevant for those building solutions for the safety and security of communities.
Show Notes
- Professor Landay advocates for viewing the discipline of Human-Computer Interaction (HCI) as sitting at the intersection of art and science. Seen in this way, the "art" of creative thinking and mobilizing the imagination to think through complex human problems meets the evaluative rigor of scientific inquiry.
- User-Centered Design (UCD): The traditional design approach that focuses on the direct user. Although still needed, there is a growing imperative within the field to go beyond this individual focus and think more deeply about the ripple effects of design beyond a single user into their broader communal and social realities.
- Community-Centered Design: A shift from UCD, necessary because AI applications often impact a broader community beyond the direct user. This approach involves broadening the design lens to engage the community—whoever is impacted by the system—in the design process, including interviewing, observing and testing.
- Society-Centered Design: The highest level of design consideration for systems that become ubiquitous and have societal-level impacts. Achieving this level often requires involving disciplines from the social sciences, humanities and the arts in AI development teams.
- Human-Centered AI (HCAI): A philosophy and research principle centered on shaping AI in alignment with human values. Its core principles include emphasizing design and ethics before, during and after development, and augmenting human potential rather than replacing or reducing it.
- Augmenting humans rather than replacing them: A core principle of HCAI that advocates for designing AIs to be symbiotic with people. The goal is to let people do the pieces of a job they are best at and enjoy, while having machines handle the parts that are repetitive, tedious or better suited for machines.
- Human-AI Collaboration (Teaming): The key to improved performance, where a joint human-AI system performs better than either the AI or the human alone. This collaboration needs to be personalized, meaning the AI adapts to the human's strengths, and the human adapts their usage to the AI partner.
- Superintelligence (through collaboration): The idea that intelligence has always been a collective, socially distributed phenomenon. As Landay puts it, every apparent leap of "superintelligence," like putting a man on the moon, has been an emergent property of human cooperation. Extending that logic, if artificial superintelligence ever does emerge, it is unlikely to appear as a sudden, independent breakthrough. Instead, it will arise through the evolving collaboration between humans and AI systems, as a product of our shared sociotechnical networks, not a replacement for them.
- AI Time: A metaphor for the speed of progress in the AI industry, suggesting that AI time is moving at 10X speed, with five or ten normal Earth years of progress happening in only one year of AI time.
August 26, 2025
Ep. 3 Part 2: Designing for Trust: Power, Policing and AI with Professor Martin Holbraad
“A police report isn't merely a record; it can be a memory aid, a legal artifact, a shield or even a performance.”
Martin Holbraad
Leading Anthropologist and Director of the Ethnographic Insights Lab at University College London, United Kingdom
“A police report isn't merely a record; it can be a memory aid, a legal artifact, a shield, or even a performance.”
Join us as Mahesh welcomes back Professor Martin Holbraad, a leading anthropologist and director of the Ethnographic Insights Lab at University College London. This episode delves into the idea that anthropology offers more than cultural interpretation; it provides a radically different way of thinking about systems. Using frameworks like actor-network theory, you’re invited to rethink agency, not as something humans possess, but as something co-produced in the relationships between tools, practices, people and policies. This fundamentally changes how AI is understood, not as a "ghost in the machine," but as an active participant in dynamic, shifting networks where meaning, power and responsibility are constantly negotiated.
As Mahesh and Martin discuss, a police report isn't merely a record; it can be a memory aid, a legal artifact, a shield, or even a performance. These overlapping realities that exist within the same system are never neutral; they are shaped by power, pressure and purpose.
For those who seek to be on the cutting edge of innovation, anthropology reminds us that imagination is a method. Every system encodes assumptions about the world, and at Motorola Solutions, we believe those assumptions can always be questioned, rethought and reimagined to better serve the mission-critical needs of public safety professionals around the world.
Show Notes
- Agency: In the context of actor-network theory, agency is not limited to humans but is distributed across the entire network of people and things. The term "actant" is used instead of "actor" to acknowledge that non-human elements can also have agency.
- Symmetry: A principle in actor-network theory that suggests treating modern or "Western" societies with the same frameworks used to study other societies, which are often labeled as "non-modern" or "non-Western." Bruno Latour was very keen on this concept, which challenges the idea of "purifying" the world into distinct categories like nature and culture, or things versus people.
- Ontological Multiplicity: The idea that things are not just one thing but can have multiple, different and even ambiguous meanings or constitutions across different social situations, spaces and times. For example, an incident report can be a record of a memory, a performance of professional competence, a legal artifact and a shield. This concept suggests that a system can contain "overlapping realities."
- Abduction: A form of reasoning that is neither deductive nor purely inductive. It involves coming up with the "best understanding given the facts" and making constant, reciprocal adjustments. Abduction is a way to navigate a dynamic system that is constantly evolving and unpredictable.
August 11, 2025
Ep. 3 Part 1: Anthropology's Key to Robust AI with Professor Martin Holbraad
In this episode, Mahesh and leading anthropologist Professor Martin Holbraad of University College London’s Ethnographic...
Martin Holbraad
Leading Anthropologist and Director of the Ethnographic Insights Lab at University College London, United Kingdom
In this episode, Mahesh and leading anthropologist Professor Martin Holbraad of University College London’s Ethnographic Insights Lab unravel the transformative power of anthropology. Far more than just studying cultures, anthropology – through its deep ethnographic research – unveils a powerful truth: to build truly effective AI, you must first understand the very fabric of human knowledge. It’s about redefining what it means to design for a world where "AI changes the meaning of reports themselves" and the line between "where the machine ends and the person starts" blurs. This first part of a two-episode series plunges into the fundamental questions of human memory and how anthropological thinking can conquer the complex challenges of AI, as seen in Motorola Solutions and UCL’s groundbreaking work on police incident reporting. Discover why understanding "what kind of world you're building for" isn't just an essential first step; it’s the only step to crafting tools that truly serve humanity.
Martin Holbraad is Professor of Social Anthropology at University College London (UCL). He has conducted anthropological research in Cuba since the late 1990s, on the intersection of politics and ritual practices, producing works including Truth in Motion: The Recursive Anthropology of Cuban Divination (Chicago, 2012) and Shapes in Revolution: The Political Morphology of Cuban Life (Cambridge, 2025). He has made significant contributions to anthropological theory, including in his co-authored volume The Ontological Turn: An Anthropological Exposition (Cambridge, 2016). He is Director of the Ethnographic Insights Lab (EI-Lab), which he founded at UCL in 2020 as a consultancy dedicated to helping organizations better understand their customers and users, as well as themselves. EI-Lab’s tagline is “the problem is you don’t know what the problem is.”
Show Notes
- Ethnographic Research/Fieldwork: The methodological approach highlighted as crucial for uncovering ontological multiplicity. By conducting in-depth fieldwork with actual users, researchers can understand how tools are truly conceived and used in their everyday lives, often revealing perspectives that differ significantly from designers' initial assumptions.
- Actor-Network Theory: Latour’s actor-network theory holds that human and non-human actors form shifting networks of relationships that define situations and determine outcomes. It is a constructivist approach arguing that society, organizations, ideas and other key elements are shaped by the interactions between actors in diverse networks, rather than having inherent fixed structures or meanings.
- Camera Conformity: When officers review body-worn camera footage before writing reports, they may unconsciously adjust their accounts to match what’s on video, omitting details they personally recall but that aren’t visible in the footage.
- Memory Contamination: Exposure to AI-generated or external content can introduce errors into an officer’s memory, causing them to unintentionally overwrite or alter their own recollections with inaccurate information.
- Cognitive Offloading: Relying on AI to generate reports or recall details can reduce the need for officers to actively use their own memory, potentially weakening recall when they most need it.
- Incident Reporting Tools and Police Officers: A prime example illustrating ontological multiplicity involves police incident reports. While developers might assume a report is solely a "record as faithful as possible of the officer's subjective recall," ethnographic research revealed that for officers it is also a "performance of [their] professional competence," designed to convince a jury, promotions panel or complaints panel of their effectiveness. This demonstrates how one "thing" (an incident report) can be ontologically multiple, serving different purposes simultaneously.
June 30, 2025
Ep. 2: Safer Retail Experiences through AI with Dr. Read Hayes
In the second episode of Mahesh the Geek, Mahesh is joined by Dr. Read Hayes, executive director of the Loss Prevention Research...
Read Hayes
Research Scientist, University of Florida & Loss Prevention Research Council (LPRC)
In the second episode of Mahesh the Geek, Mahesh is joined by Dr. Read Hayes, executive director of the Loss Prevention Research Council (LPRC), as they explore the evolution of loss prevention, emphasizing the importance of prevention over response in public safety. They discuss the integration of technology, such as AI and body-worn cameras, in enhancing crime detection and prevention. The dialogue also highlights the significance of collaboration between retailers and law enforcement, the challenges of data sharing and the behavioral cues that can indicate potential criminal activity. Mahesh and Dr. Hayes also discuss insights into future trends in crime prevention and the role of technology in shaping these developments.
Read Hayes, PhD is a Research Scientist and Criminologist at the University of Florida, and Director of the LPRC. The LPRC includes 100 major retail corporations, multiple law enforcement agencies, trade associations and more than 170 protective solution/tech partners working together year-round in the field, in VR and in simulation labs with scientists and practitioners to increase people and place safety by reducing theft, fraud and violence. Dr. Hayes has authored four books and more than 320 journal and trade articles. Learn more about the LPRC and its extensive research: https://lpresearch.org/research/
Show Notes
- Loss Prevention Research Council (LPRC): An active community of researchers, retailers, solution partners, manufacturers, law enforcement professionals and others who believe research and collaboration will lead to a safer world for shoppers and businesses.
- Public and Private Collaboration/Partnerships: The critical need for law enforcement and private enterprises (like retailers) to work together to address crime, especially through real-time information sharing.
- Real-time Crime Integrations/Pre-crime Interventions: The goal of achieving immediate data exchange and proactive measures before and during a crime event, contrasting with traditional forensic, after-the-fact investigations.
- The "Affect, Connect, Detect" Model: This core framework leverages the scientific method to understand and counter criminal activity.
  - Affect: Understanding the "initiation and progression" of a crime, similar to a medical pathology, and figuring out how to impact that progression to make it harder, riskier or less rewarding for offenders.
  - Detect: The goal is earlier detection of criminal intent or activity, achieved by arraying sensors (digital, aural, visual, textual) to pick up indicators before, during and after a crime, such as online bragging or coordinating activities.
  - Connect: This emphasizes information sharing and collaboration across three levels: Connect1 (smart and connected places, enhancing a place manager's awareness), Connect2 (smart connected enterprises, sharing information between stores, e.g., "hot lists") and Connect3 (smart connected communities, partnering with law enforcement and other organizations beyond the enterprise).
June 30, 2025
Ep. 1: Bold, Responsible AI with Brigadier General (ret.) Patrick Huston
In the debut episode of Mahesh the Geek, Motorola Solutions EVP & CTO Mahesh Saptharishi is joined by Brigadier General...
Brigadier General (ret.) Patrick Huston
General Counsel / Army General (ret.) / Board Member
In the debut episode of Mahesh the Geek, Mahesh is joined by Brigadier General (ret.) Patrick Huston, who shares his unique journey from military service to becoming a legal expert in AI and cybersecurity. Mahesh and Patrick delve into topics including human-machine teaming, regulatory challenges, AI's impact on intellectual property law and fair use, the need for AI risk management standards, cyber risks, open-source LLM security and licensing concerns, AI's transformation of work through automation, and the potential for escalating errors with AI agents. General Huston’s insights are grounded in his message for those considering leveraging AI – be bold, be responsible and be flexible.
General Huston is an engineer, a soldier, a helicopter pilot, a lawyer, a technologist and a corporate board member. He’s a renowned strategist, speaker and author on AI, cybersecurity and quantum computing. He is a Certified Director with the National Association of Corporate Directors and serves on the FBI’s Scientific Working Group on AI, the American Bar Association's (ABA) AI Task Force and the Association of Corporate Counsel’s (ACC) Cybersecurity Board.
Show Notes
- Human and Machine Teaming / Human Augmentation: This concept involves combining the respective strengths of humans and machines. Humans excel at leadership, common sense, empathy and humor. Machines outperform humans at ingesting mass data, performing rapid computations and handling repetitive tasks where human attention wanes. The goal is not choosing between humans or machines, but leveraging the best of both worlds.
- Key AI Principles: Fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability. These principles are generally universal, but their application and implementation vary significantly by country or region, as seen with Europe's stricter data privacy rules versus the United States' patchwork of state and local laws.
- General Huston’s Advice for Adopting AI Applications:
- Be bold: Leverage AI to remain competitive.
- Be responsible: Understand and actively mitigate risks; AI is not a magic solution.
- Be flexible: Be ready to pivot, adapt, and fine-tune your approach as some things will work well and others won't.
May 1, 2025
Introducing Mahesh the Geek
Mahesh Saptharishi is the chief technology officer at Motorola Solutions, and a person obsessively interested in all things tech.
Mahesh Saptharishi is the chief technology officer at Motorola Solutions, and a person obsessively interested in all things tech.
A geek is one who is knowledgeable and obsessed, and this podcast is Mahesh’s attempt to seek knowledge. Mahesh the Geek will delve into the core of mission-critical AI – technology that safeguards lives, communities and essential daily services. In this series, we'll be exploring the science, the challenges and the incredible potential of AI when it matters most. During each episode, Mahesh will talk to the experts and ask them one crucial question: what is mission-critical AI, and how is it shaping our future?
Let's geek out.


