{"id":4061,"date":"2026-05-14T08:02:19","date_gmt":"2026-05-14T08:02:19","guid":{"rendered":"https:\/\/falcoxai.com\/main\/who-decides-ai-tells-campbell-brown\/"},"modified":"2026-05-14T08:02:19","modified_gmt":"2026-05-14T08:02:19","slug":"who-decides-ai-tells-campbell-brown","status":"publish","type":"post","link":"https:\/\/falcoxai.com\/main\/who-decides-ai-tells-campbell-brown\/","title":{"rendered":"Who Decides What AI Tells Campbell Brown? The AI Accuracy Gap"},"content":{"rendered":"<p>When Campbell Brown\u2019s Forum AI tested leading foundation models on high-stakes topics like geopolitics and mental health, it found AI judges agreeing with human experts at just 90% \u2014 a gap that could mean life-or-death decisions based on flawed information. You\u2019re not alone if you\u2019re wondering: who decides what AI tells you? The answer matters more than ever as models shape public opinion, hiring practices, and even personal well-being.<\/p>\n<p>This article reveals how Forum AI is tackling the AI accuracy gap by bringing in experts like former Secretary of State Tony Blinken and former House Speaker Kevin McCarthy to set benchmarks. You\u2019ll learn why accuracy isn\u2019t just a technical challenge \u2014 it\u2019s a business imperative \u2014 and how you can ensure AI delivers reliable insights, not noise.<\/p>\n<hr>\n<h2>The Accuracy Gap in AI-Driven News<\/h2>\n<p>AI is reshaping how we consume news, but accuracy is often sacrificed for speed and engagement. Platforms like Forum AI, founded by Campbell Brown, are revealing a troubling trend: AI models frequently fail to grasp the nuance of high-stakes topics like geopolitics and mental health. This isn\u2019t just a technical flaw \u2014 it\u2019s a systemic issue with real-world consequences.<\/p>\n<p>When AI pulls from unreliable sources, like Chinese Communist Party websites for unrelated stories, or displays political bias, the trust in information erodes. Brown notes that these errors are not just technical \u2014 they\u2019re ethical. The cost of inaccuracy isn\u2019t just misinformation; it\u2019s a loss of credibility, and in some cases, a threat to public understanding.<\/p>\n<p>Accuracy is overlooked because the metrics that drive AI development \u2014 like engagement and speed \u2014 don\u2019t align with the needs of informed decision-making. The result is a gap between what AI can do and what it should do \u2014 a gap that Forum AI is trying to close with human expertise and rigorous benchmarks.<\/p>\n<figure class=\"wp-post-image\"><img decoding=\"async\" src=\"https:\/\/falcoxai.com\/main\/wp-content\/uploads\/2026\/05\/who-decides-what-ai-tells-camp-inline-1.jpg\" alt=\"A chart shows the accuracy gap in AI-driven news, with the focus keyword \"decides AI tells Campbell Brown\" highlighted in the data comparison\" loading=\"lazy\" \/><figcaption>Photo by <a href=\"https:\/\/www.pexels.com\/@ono-kosuki\">Ono  Kosuki<\/a> on <a href=\"https:\/\/www.pexels.com\">Pexels<\/a><\/figcaption><\/figure>\n<hr>\n<h2>What Forum AI Is Doing to Fix the Problem<\/h2>\n<h3>How Forum AI evaluates AI models<\/h3>\n<p>Forum AI is tackling AI accuracy by evaluating models on high-stakes topics where nuance matters. 
Instead of relying on generic benchmarks, they use real-world scenarios in areas like geopolitics, mental health, and finance \u2014 domains where errors can have serious consequences.<\/p>\n<h3>The experts involved in the process<\/h3>\n<p>Forum AI recruits top experts, including former Secretary of State Tony Blinken and former House Speaker Kevin McCarthy, to help shape the benchmarks. These individuals bring deep, firsthand knowledge to ensure AI models are judged on real-world impact, not just technical performance.<\/p>\n<h3>Setting a 90% consensus threshold<\/h3>\n<p>Forum AI aims for AI judges to reach 90% consensus with human experts \u2014 a clear, measurable goal. This threshold ensures models are not just technically sound but also aligned with human judgment on complex issues. As Campbell Brown puts it, this is a step toward making AI more reliable and accountable.<\/p>\n<hr>\n<h2>The Challenges of Training AI on Complex Topics<\/h2>\n<h3>Why geopolitics and mental health are hard for AI<\/h3>\n<p>Geopolitics and mental health are among the most difficult subjects for AI to navigate. These areas are nuanced, context-dependent, and often lack clear-cut answers. As Campbell Brown explained, AI models like Gemini have shown a tendency to pull from biased or irrelevant sources, such as Chinese Communist Party websites, when covering topics unrelated to China.<\/p>\n<h3>Missing context and perspectives<\/h3>\n<p>AI frequently misses critical context and perspectives, leading to incomplete or misleading information. For example, models may present arguments without acknowledging opposing viewpoints or fail to recognize the broader implications of a geopolitical event. This lack of depth undermines trust and reliability, especially in high-stakes scenarios.<\/p>\n<h3>The importance of diverse evaluation<\/h3>\n<p>Forum AI is addressing these challenges by incorporating diverse human expertise into its evaluation process. By involving experts like former Secretary of State Tony Blinken and former House Speaker Kevin McCarthy, Forum AI ensures that AI models are judged against a broad range of perspectives. This approach helps bridge the AI accuracy gap and brings models closer to human-level understanding.<\/p>\n<figure class=\"wp-post-image\"><img decoding=\"async\" src=\"https:\/\/falcoxai.com\/main\/wp-content\/uploads\/2026\/05\/who-decides-what-ai-tells-camp-inline-2.jpg\" alt=\"A person discusses AI challenges with Campbell Brown as Forum AI evaluates complex topics with human input\" loading=\"lazy\" \/><figcaption>Photo by <a href=\"https:\/\/www.pexels.com\/@kampus\">Kampus Production<\/a> on <a href=\"https:\/\/www.pexels.com\">Pexels<\/a><\/figcaption><\/figure>\n<hr>\n<h2>Why Forum AI\u2019s Approach Outperforms the Status Quo<\/h2>\n<h3>Forum AI vs. current foundation models<\/h3>\n<p>Most foundation models lack a clear mechanism for accountability, often producing biased or inaccurate outputs on high-stakes topics. Forum AI addresses this by evaluating models through human-driven benchmarks, ensuring outputs align with expert judgment. For example, models like Gemini have been found to pull from Chinese Communist Party websites for unrelated stories, highlighting the gap between current AI and what\u2019s needed in real-world scenarios.<\/p>\n<h3>The role of human experts<\/h3>\n<p>Forum AI\u2019s methodology hinges on recruiting top experts in fields like geopolitics, mental health, and finance. 
These experts help create benchmarks and train AI judges, ensuring models are evaluated on nuanced, complex topics. By involving figures like Niall Ferguson and former Secretary of State Tony Blinken, Forum AI ensures outputs are not only technically sound but also ethically aligned.<\/p>\n<h3>Fixes that can be implemented now<\/h3>\n<p>Forum AI has identified simple, actionable fixes to improve AI accuracy, such as better context integration and bias correction. These changes can be implemented without overhauling entire systems, offering a practical path forward. As Campbell Brown notes, \u201cThere are some very easy fixes that would vastly improve the outcomes.\u201d This approach ensures AI delivers reliable, trustworthy information \u2014 a critical need for operations leaders and quality managers today.<\/p>\n<hr>\n<h2>How You Can Implement AI Accountability in Your Organization<\/h2>\n<h3>Evaluate AI models on relevant topics<\/h3>\n<p>Before deploying AI, test it on topics directly relevant to your operations. Campbell Brown\u2019s Forum AI evaluates models on high-stakes subjects like geopolitics and mental health, where accuracy matters most. Use this approach to identify gaps in your AI\u2019s understanding of your industry\u2019s nuances.<\/p>\n<h3>Involve domain experts in AI training<\/h3>\n<p>Domain experts can train AI to recognize context and avoid biases. Forum AI works with experts like former Secretary of State Tony Blinken to align AI with real-world expertise. This ensures your AI doesn\u2019t just repeat information \u2014 it understands it.<\/p>\n<h3>Set clear benchmarks for accuracy<\/h3>\n<p>Define measurable accuracy targets, such as Forum AI\u2019s 90% consensus threshold with human experts. This creates accountability and ensures AI aligns with your quality and ethical standards. Without clear benchmarks, accuracy remains subjective and hard to track.<\/p>\n
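<p>To make a target like that concrete, here is a minimal sketch of how a team might track consensus between an AI judge and a panel of human experts. The verdict data, the <code>consensus_rate<\/code> helper, and the 0.90 constant are illustrative assumptions, not Forum AI\u2019s actual tooling:<\/p>\n<pre><code># A minimal sketch of tracking a consensus benchmark between an AI judge\n# and a panel of human experts. The verdict data and the 0.90 target are\n# illustrative assumptions, not Forum AI's actual tooling.\n\nCONSENSUS_TARGET = 0.90  # the 90% consensus goal described above\n\ndef consensus_rate(ai_verdicts, expert_verdicts):\n    '''Fraction of test items where the AI judge matches the expert panel.'''\n    if len(ai_verdicts) != len(expert_verdicts):\n        raise ValueError('verdict lists must be the same length')\n    matches = sum(a == e for a, e in zip(ai_verdicts, expert_verdicts))\n    return matches \/ len(ai_verdicts)\n\n# Hypothetical pass\/fail verdicts on ten high-stakes test items.\nai = ['pass', 'fail', 'pass', 'pass', 'fail', 'pass', 'pass', 'fail', 'pass', 'pass']\nexperts = ['pass', 'fail', 'pass', 'fail', 'fail', 'pass', 'pass', 'fail', 'pass', 'fail']\n\nrate = consensus_rate(ai, experts)\nprint(f'consensus: {rate:.0%}')  # 80% here, short of the 90% bar\nif rate >= CONSENSUS_TARGET:\n    print('model meets the benchmark')\nelse:\n    print('model falls short; review the disagreements before deployment')\n<\/code><\/pre>\n<p>The rate alone only tells you whether a model clears the bar; logging which items the AI judge and the experts disagreed on gives your domain experts concrete cases to review.<\/p>\n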
<hr>\n<div class=\"wp-cta-block\">\n<p><strong>Ready to find AI opportunities in your business?<\/strong><br \/>\nBook a <a href=\"https:\/\/falcoxai.com\">Free AI Opportunity Audit<\/a> \u2014 a 30-minute call where we map the highest-value automations in your operation.<\/p>\n<\/div>\n<hr>\n<h2>Common Misconceptions About AI Accuracy<\/h2>\n<h3>AI is not inherently accurate<\/h3>\n<p>Many believe AI produces reliable outputs by default, but this is a dangerous illusion. As Campbell Brown noted, leading models like Gemini have pulled content from Chinese Communist Party websites for unrelated topics, showing how easily AI can misrepresent facts. AI systems are only as accurate as the data they\u2019re trained on \u2014 and that data is often biased, incomplete, or misleading.<\/p>\n<h3>Accuracy is not a technical afterthought<\/h3>\n<p>Accuracy isn\u2019t a luxury; it\u2019s a core requirement for AI to be trusted in high-stakes scenarios. Forum AI\u2019s work with geopolitics, mental health, and hiring shows that accuracy must be prioritized from the start, not tacked on later. Models that fail to align with human expertise \u2014 like those showing left-leaning political bias \u2014 are not just flawed, they\u2019re risky.<\/p>\n<h3>AI can be held accountable<\/h3>\n<p>Accountability starts with governance. Forum AI\u2019s approach \u2014 using human experts to evaluate models \u2014 sets a benchmark for what\u2019s possible. It proves that AI can be held to a standard, and that accuracy is achievable with the right framework. This is not just about ethics; it\u2019s about making AI a tool that works for your business, not against it.<\/p>\n<hr>\n<h2>The Future of AI Governance and Accountability<\/h2>\n<h3>The need for AI governance frameworks<\/h3>\n<p>The future of AI hinges on robust governance frameworks that ensure models deliver accurate, ethical, and actionable insights. Without such structures, the risks of misinformation, bias, and operational misalignment grow exponentially.<\/p>\n<p>Forum AI\u2019s approach \u2014 leveraging human expertise to benchmark and refine models \u2014 sets a clear standard. As Campbell Brown notes, the inclusion of figures like former Secretary of State Tony Blinken and former House Speaker Kevin McCarthy in evaluating AI performance on high-stakes topics is not just symbolic; it\u2019s a practical step toward accountability.<\/p>\n<p>Organizations must move beyond vague promises and adopt frameworks that measure AI accuracy against real-world outcomes. This includes defining clear benchmarks, auditing models regularly, and involving subject-matter experts in the evaluation process.<\/p>\n<p>Ignoring these steps risks repeating the failures of past platforms that prioritized engagement over accuracy. The lesson from Facebook\u2019s past is clear: without governance, AI can become a tool for misinformation, not empowerment.<\/p>\n<p class=\"wp-source-attribution\"><em>Source: <a href=\"https:\/\/techcrunch.com\/2026\/05\/13\/who-decides-what-ai-tells-you-campbell-brown-once-metas-news-chief-has-thoughts\/\" target=\"_blank\" rel=\"noopener noreferrer\">techcrunch.com<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>When Campbell Brown\u2019s Forum AI tested leading foundation models on high-stakes topics like geopolitics and mental health, it held them to a demanding bar: AI judges should agree with human experts at least 90% of the time. The gap between that standard and what today\u2019s models deliver could mean life-or-death decisions based on flawed information. 
You\u2019re not alone if you\u2019re wondering: who <\/p>\n","protected":false},"author":1,"featured_media":4058,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[96],"tags":[468,473,471,472,75,138,469,470],"class_list":["post-4061","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-news","tag-ai-accountability","tag-ai-accuracy","tag-ai-benchmarks","tag-ai-ethics","tag-ai-governance","tag-ai-news","tag-campbell-brown","tag-forum-ai"],"_links":{"self":[{"href":"https:\/\/falcoxai.com\/main\/wp-json\/wp\/v2\/posts\/4061","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/falcoxai.com\/main\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/falcoxai.com\/main\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/falcoxai.com\/main\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/falcoxai.com\/main\/wp-json\/wp\/v2\/comments?post=4061"}],"version-history":[{"count":0,"href":"https:\/\/falcoxai.com\/main\/wp-json\/wp\/v2\/posts\/4061\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/falcoxai.com\/main\/wp-json\/wp\/v2\/media\/4058"}],"wp:attachment":[{"href":"https:\/\/falcoxai.com\/main\/wp-json\/wp\/v2\/media?parent=4061"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/falcoxai.com\/main\/wp-json\/wp\/v2\/categories?post=4061"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/falcoxai.com\/main\/wp-json\/wp\/v2\/tags?post=4061"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}