{"id":6055,"date":"2025-03-31T08:58:00","date_gmt":"2025-03-31T00:58:00","guid":{"rendered":"https:\/\/blog.nexussup.com\/?p=6055"},"modified":"2025-04-08T10:34:12","modified_gmt":"2025-04-08T02:34:12","slug":"ai-brainstorm-unleashed","status":"publish","type":"post","link":"https:\/\/blog.nexussup.com\/?p=6055","title":{"rendered":"AI Brainstorm Unleashed"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Agent Challenges &amp; Industry Shifts \u2013 March 2025 Report<\/h2>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Introducing a New Format<\/h3>\n\n\n\n<p>In March 2025, our AI Monthly Report took on a fresh format by launching an in-person \u201cAI Brainstorm\u201d event. Centered around a hot AI topic every month, we invited academic researchers, industry practitioners (including R&amp;D and technical professionals), and investors for a closed-door roundtable discussion.<\/p>\n\n\n\n<p>On March 30, we held the very first AI Brainstorm with the theme <strong>Agent<\/strong>. Six guests\u2014from universities, Internet companies, and non-profit AI research institutions\u2014joined the discussion. 
In this month\u2019s report, we\u2019ve excerpted key parts of the conversation.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"845\" height=\"576\" src=\"https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/\u5fae\u4fe1\u622a\u56fe_20250408101523.jpg\" alt=\"\" class=\"wp-image-6056\" srcset=\"https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/\u5fae\u4fe1\u622a\u56fe_20250408101523.jpg 845w, https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/\u5fae\u4fe1\u622a\u56fe_20250408101523-300x204.jpg 300w, https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/\u5fae\u4fe1\u622a\u56fe_20250408101523-768x524.jpg 768w\" sizes=\"auto, (max-width: 845px) 100vw, 845px\" \/><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">In This Issue, You\u2019ll Discover:<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Why Developing Agents Is Easy, But Making Them Work Is Hard<\/strong><\/li>\n\n\n\n<li>Key takeaways from the \u201cShell vs. 
Model\u201d segment during the event<\/li>\n\n\n\n<li>The potential of large-model applications to become the internet\u2019s traffic gateways along with OpenAI\u2019s platform vision<\/li>\n\n\n\n<li>How divergent compute investments have kept Nvidia\u2019s stock in turmoil \u2013 Jensen Huang shared an intriguing inference story<\/li>\n\n\n\n<li>Active merger and acquisition deals with Nvidia leading the charge<\/li>\n\n\n\n<li>News that 31 AI companies have secured over $50 million in financing, with vertical AI applications capturing investor favor<\/li>\n\n\n\n<li>Six case studies using a \u201cmicroscope\u201d approach to trace the internal decision paths of large models<\/li>\n<\/ul>\n\n\n\n<p><em>Feel free to leave comments with any important trends we might have missed.<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Developing Agents: Easy to Build, Hard to Refine<\/h3>\n\n\n\n<p>Earlier in March, the general-purpose Agent product <strong>Manus<\/strong> was launched. Just one day later, two open-source versions\u2014<strong>OpenManus<\/strong> and <strong>OWL<\/strong>\u2014emerged. OpenManus was even replicated by a team of four in merely three hours.<\/p>\n\n\n\n<p>Such \u201cquick and dirty\u201d remakes suggest that building an Agent may not be difficult at first glance. 
However, practical experience and system complexity show that making an Agent truly effective remains a significant challenge.<\/p>\n\n\n\n<p>After collaborating with dozens of teams, large-model companies like Anthropic now classify Agents into two categories [1]:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"has-black-color has-cyan-bluish-gray-background-color has-text-color has-background has-link-color wp-elements-046fd8c69fb555103e733cd4891605a3\"><strong>Workflows:<\/strong> Systems that coordinate large models and various tools along pre-defined code paths.<\/li>\n\n\n\n<li class=\"has-black-color has-cyan-bluish-gray-background-color has-text-color has-background has-link-color wp-elements-193db6b06299c196ec4e002bb3b81935\"><strong>Agents:<\/strong> Systems where the large model autonomously chooses its processing steps and tool usage to complete tasks independently.<\/li>\n<\/ul>\n\n\n\n<p>The ease of Agent development is largely driven by a maturing foundation of underlying models, frameworks, and tool ecosystems. Companies like OpenAI and Anthropic offer model APIs, and open-source standards now exist to interface with browsers, file systems, search, and more. Anthropic\u2019s recently introduced Model Context Protocol (MCP) is being widely adopted to standardize how Agents connect with external tools\u2014OpenAI has joined this initiative.<\/p>\n\n\n\n<p>Despite these advancements\u2014whether it\u2019s Manus and its various open-source derivatives or OpenAI\u2019s Deep Research Agent\u2014problems continue to arise:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"has-cyan-bluish-gray-background-color has-background\"><strong>Large Model Limitations:<\/strong><br>Models still suffer from severe hallucinations, logic leaps, challenges in processing long texts, and outdated training data. 
Even with RAG (Retrieval-Augmented Generation) as a stopgap, the retrieval step can introduce errors of its own.<\/li>\n\n\n\n<li class=\"has-cyan-bluish-gray-background-color has-background\"><strong>System Design Challenges:<\/strong><br>Precisely guiding model behavior is hard. When executing complex tasks, models can fall into infinite loops, and error accumulation worsens with longer task chains. Furthermore, much of the publicly available information is designed for human interaction rather than for large models.<\/li>\n<\/ul>\n\n\n\n<p>Anthropic has offered several suggestions for tool-level improvements [1]:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Think from the model\u2019s perspective\u2014a good tool definition typically includes usage examples, edge cases, and clear input formatting that sets it apart from other tools.<\/li>\n\n\n\n<li>Continuously test how the model uses the tool and iterate by learning from its mistakes.<\/li>\n\n\n\n<li>Implement \u201cpoka-yoke\u201d (foolproofing) by tweaking parameter settings to lower the probability of model errors.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">\u201cThe Model Is the Product\u201d<\/h3>\n\n\n\n<p>Alexander Doria from AI startup Pleias insists that \u201cthe model is the product\u201d [2]. He explains that OpenAI\u2019s Deep Research did not simply wrap the o3 model into a product; instead, it trained a new model through reinforcement learning to give it search capabilities\u2014rather than just calling external tools, adding prompts, or chaining tasks.<\/p>\n\n\n\n<p>Currently, most Agent products are workflow-based. While these can add value in vertical scenarios, major breakthroughs require a complete model redesign. 
Focusing solely on application development is like \u201cusing a general from one war to fight another.\u201d<\/p>\n\n\n\n<p>During our AI Brainstorm, the discussion \u201cIs the Shell More Important Than the Model for an Agent?\u201d produced several key points:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li class=\"has-cyan-bluish-gray-background-color has-background\"><strong>Third-Party vs. In-House Products:<\/strong><br>Anthropic\u2019s CPO Mike Krieger noted that the distinction between \u201cin-house\u201d and \u201cthird-party\u201d products is an instructive lens. For instance, <strong>Cursor<\/strong>\u2014a successful third-party product\u2014did not train its own large model but impressed with its interaction design, creating an immersive Agent experience that fits seamlessly into human workflows and production environments.<\/li>\n\n\n\n<li class=\"has-cyan-bluish-gray-background-color has-background\"><strong>The Shell Is a Starting Point:<\/strong><br>For an Agent, the shell is at least the initial layer. If your shell is forward-thinking, you can wait for the model to improve, and your product evolves accordingly.<\/li>\n\n\n\n<li class=\"has-cyan-bluish-gray-background-color has-background\"><strong>OpenHands\u2019 Early Model:<\/strong><br>OpenHands began as a shell, with plans for model training down the line. Integrating a commercial model API did not hinder user adoption; the team believed that if the shell worked well enough for early users, that was sufficient for the time being. The steady improvements from Claude 3.5 to 3.7 have since reinforced this view.<\/li>\n\n\n\n<li class=\"has-cyan-bluish-gray-background-color has-background\"><strong>Packaging as Input\/Output Transformation:<\/strong><br>The \u201cshell\u201d can be seen as a transformation layer that formats inputs and outputs for the base model. 
When one side becomes exceptionally strong, the other\u2019s influence on overall performance might lessen.<\/li>\n\n\n\n<li class=\"has-cyan-bluish-gray-background-color has-background\"><strong>MCP\u2019s Role:<\/strong><br>Anthropic\u2019s MCP further boosts the value of Agent products. This open ecosystem protocol lets any company wrap existing software so that large language models can call it as a tool.<\/li>\n\n\n\n<li class=\"has-cyan-bluish-gray-background-color has-background\"><strong>Iterative Model Improvements:<\/strong><br>Pre-training, fine-tuning, and Agent architecture optimization work hand-in-hand. Research on DeepSeek and other inference models shows that continuous iteration of the base model is crucial for inference performance\u2014this development stems from the shift from RL-based to LLM-based architectures.<\/li>\n\n\n\n<li class=\"has-cyan-bluish-gray-background-color has-background\"><strong>Slowing Iterations:<\/strong><br>Base model updates remain key to enhancing Agent performance. However, the speed of these iterations has slowed due to diminishing returns (the Scaling Laws\u2019 marginal gains) and the rising resource barriers that concentrate \u201cmodel ownership\u201d among top-tier companies.<\/li>\n\n\n\n<li class=\"has-cyan-bluish-gray-background-color has-background\"><strong>Vertical Integration Over Disruption:<\/strong><br>The evolution of large models will likely push leading vertical application providers to upgrade rather than completely disrupt market dynamics. 
In the race for super applications, success will depend on building robust ecosystem channels and rapidly integrating local services such as maps, payments, and lifestyle offerings, gradually transforming user interaction from simple Q&amp;A into embedded daily experiences.<\/li>\n\n\n\n<li class=\"has-cyan-bluish-gray-background-color has-background\"><strong>From Generic Tech to Domain-Specific Expertise:<\/strong><br>Much like the shift from the Internet era to the mobile era, the intelligent application era driven by large models marks a transition from general-purpose technology to enhanced domain expertise. In the future, the barriers for Agent applications will shift from pure engineering challenges to the accumulation of user insights, scenario knowledge, and industry understanding.<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Large Model Applications as the Gateway to the Internet<\/h3>\n\n\n\n<p>In March, OpenAI CEO Sam Altman was asked in an interview [3]:<br><em>\u201cIn five years, what will be more valuable: a website with 1 billion daily active users that requires no customer acquisition, or the most advanced model?\u201d<\/em><br>After a two-second pause, he chose \u201ca website with 1 billion daily active users.\u201d Altman envisions OpenAI as the gateway to the Internet\u2014users would use their OpenAI accounts, with usage credits or custom models, to access any third-party service integrated with the OpenAI API.<br>He said, \u201cThis is truly the key to becoming a great platform.\u201d<\/p>\n\n\n\n<p>This vision has already begun to emerge in OpenAI\u2019s Agent product <strong>Operator<\/strong>, released in January. Operator can search the web to plan travel, write reports, provide shopping advice, and integrate with services such as DoorDash, Uber, and eBay. 
Although Operator hasn\u2019t yet made a huge impact, the trend of large-model applications driving web traffic is becoming unmistakable.<\/p>\n\n\n\n<p>Adobe Analytics revealed [4] that since September 2024 the traffic driven by large-model applications has doubled every two months. In the last two months of last year, this traffic grew 1,200% year-over-year. Relative to traditional sources, this AI-driven traffic sees an 8% longer website dwell time, 12% more page views, a 23% lower bounce rate, and\u2014though conversion rates are 9% lower\u2014they\u2019re still on the rise. Additionally, a survey of 5,000 U.S. consumers found nearly 40% are using AI-assisted shopping, and over half plan to do so later this year.<\/p>\n\n\n\n<p>Many e-commerce and local services heavily depend on on-site recommendation advertising (think Amazon, Alibaba, JD.com, Pinduoduo, Meituan, etc.). If an AI\u2014not a human\u2014is visiting these sites, will the traditional ad systems still be effective?<br>On the February earnings call, when asked about how Agents might affect its e-commerce business, Amazon CEO Andy Jassy offered an ambiguous answer: \u201cMost retailers already have some form of interaction with Agents, and we\u2019re no different.\u201d<br>Meanwhile, Walmart\u2019s U.S. CTO Hari Vasudev proposed a countermeasure: \u201cBuild your own Agent to interact with other Agents, recommend products, or provide more detailed product information.\u201d<\/p>\n\n\n\n<p>An AI strategy expert at the roundtable noted that even if the primary traffic channels change, the established giants and companies with deep industry expertise will retain a competitive advantage. With the gap between open-source and closed-source model performance narrowing, application vendors can now afford low-cost model capabilities\u2014supervised fine-tuning (SFT) of a domain-specific model becomes feasible. 
Companies that have historically accumulated users, data, and IT capabilities are better positioned in this competition.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">The Compute Investment Debate &amp; Nvidia\u2019s Inference Story<\/h3>\n\n\n\n<p>In March, the debate over compute investment continued, with Nvidia\u2019s stock experiencing persistent fluctuations: a 13% drop in the first 10 days of the month, a rebound, then another fall.<br>This volatility stems from two key unresolved issues:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Will compute consumption shift from training to inference, and will Nvidia\u2019s GPUs remain the only choice?<\/li>\n\n\n\n<li>Are current compute investments nearing saturation?<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"684\" src=\"https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/640-1-1-1024x684.png\" alt=\"\" class=\"wp-image-6057\" srcset=\"https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/640-1-1-1024x684.png 1024w, https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/640-1-1-300x200.png 300w, https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/640-1-1-768x513.png 768w, https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/640-1-1-900x600.png 900w, https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/640-1-1.png 1080w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Recent signs even suggest a potential oversupply of compute power:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Smaller Models, Better Performance:<\/strong><br>Google\u2019s open-source 27-billion-parameter Gemma 3 model outscored the older DeepSeek-V3 (671 billion total parameters, with 37 billion activated per token) in Chatbot Arena; Alibaba\u2019s 32-billion-parameter inference model QwQ almost matches R1\u2019s performance.<\/li>\n\n\n\n<li><strong>Cutting AI Spend:<\/strong><br>Media reports indicate that because companies like DeepSeek, Alibaba, and Google have released models that require less compute but still perform well, many U.S. companies have cut back on their AI expenditures.<\/li>\n\n\n\n<li><strong>Competitive Pricing &amp; New Chips:<\/strong><br>AWS is selling its Trainium chips (with the same compute as Nvidia\u2019s H100) at 25% of the price, while Google is working with MediaTek to lower AI chip costs further.<br>Echoing earlier caution from Microsoft CEO Satya Nadella, Alibaba Chairman Joe Tsai warned that some new U.S. data center construction may be happening at \u201cbubble\u201d levels. Yet, Apple\u2014previously reluctant to work with Nvidia\u2014has started purchasing Nvidia chips, renewing market confidence.<\/li>\n<\/ul>\n\n\n\n<p>At the March GTC, Nvidia CEO Jensen Huang shared a new inference story:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Increased Inference Demand:<\/strong><br>After OpenAI introduced the o1 inference model, demand for AI compute surged to 100 times what Nvidia expected a year ago. Inference-capable AI breaks problems down step by step, exploring multiple approaches before choosing the best answer, so the number of tokens generated can easily exceed previous models by a hundredfold.<\/li>\n\n\n\n<li><strong>Power Constraints &amp; GPU Advantage:<\/strong><br>With data centers facing limited electrical resources, Nvidia\u2019s B-series GPUs deliver 25 times the performance of the H-series at the same power consumption. The upcoming Vera Rubin architecture, expected in 2026, should further amplify this efficiency.<br>Nvidia\u2019s Dynamo software dynamically adjusts how GPUs handle token processing to maintain both user experience and computational efficiency. 
Huang summed it up: \u201cWhen the B-series GPUs start shipping in bulk, you won\u2019t even be able to give away the H-series for free.\u201d<br>Previously, his catchphrase was \u201cthe more you buy, the more you save\u201d\u2014now it\u2019s \u201cthe more you buy, the more you earn.\u201d<\/li>\n<\/ul>\n\n\n\n<p>Not every task requires an inference model. Smaller inference models\u2014such as o3 mini or QwQ-32B\u2014can also perform well. Ultimately, while inference does raise token consumption, whether it will escalate a hundredfold remains to be seen.<\/p>\n\n\n\n<p>At the event, an expert in AI inference commented:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"574\" src=\"https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/\u5fae\u4fe1\u56fe\u7247_20250408101538-1024x574.png\" alt=\"\" class=\"wp-image-6058\" srcset=\"https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/\u5fae\u4fe1\u56fe\u7247_20250408101538-1024x574.png 1024w, https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/\u5fae\u4fe1\u56fe\u7247_20250408101538-300x168.png 300w, https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/\u5fae\u4fe1\u56fe\u7247_20250408101538-768x431.png 768w, https:\/\/blog.nexussup.com\/wp-content\/uploads\/2025\/04\/\u5fae\u4fe1\u56fe\u7247_20250408101538.png 1330w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cAgent applications that seem to consume an excessive amount of compute still have significant room for optimization. 
When Agents browse the web to fetch information, they might capture massive amounts of irrelevant data (for example, screenshots where 99% of the pixels are useless), which greatly increases compute costs.\u201d<\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Mergers, Acquisitions &amp; Funding Trends<\/h3>\n\n\n\n<p><strong>M&amp;A Activity and Strategic Expansion:<\/strong><br>In March, the announced volume of major M&amp;A deals exceeded the total of the previous three months\u2014with six transactions valued at over $100 million publicly revealed, and several more still under negotiation.<\/p>\n\n\n\n<p>The AI industry is shifting from competing solely on technology or products to integrating entire ecosystems. Leading companies are actively broadening their business boundaries and building ecosystem moats. For example:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Nvidia acquired synthetic data company Gretel for $320 million and is negotiating a multi-hundred-million-dollar deal to acquire Lepton AI (founded by Alibaba\u2019s former VP, Jia Yangqing) to expand from compute acceleration to inference and data services.<\/li>\n\n\n\n<li>Elon Musk\u2019s xAI is acquiring X (formerly Twitter) in an all-stock deal, integrating data, models, compute, distribution channels, and talent. The deal values xAI at $80 billion and X at $33 billion.<\/li>\n\n\n\n<li>Google, ServiceNow, and UiPath have made significant acquisitions to expand their enterprise service ecosystems.<\/li>\n<\/ul>\n\n\n\n<p>At the same time, CoreWeave, an Nvidia-backed startup specializing in GPU compute leasing, went public, raising $1.5 billion after scaling back earlier plans for a $4 billion raise.<\/p>\n\n\n\n<p><strong>Funding Trends in Vertical AI Applications:<\/strong><br>In March, 31 AI companies raised more than $50 million, an increase of eight from the previous month. 
Funding appears steady, with the headline deals in the base model space:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>OpenAI raised an additional $40 billion, bringing total funding to $58.6 billion with a valuation above $300 billion.<\/li>\n\n\n\n<li>Anthropic secured $3.5 billion, totaling $18 billion in funding and reaching a $61.5 billion valuation.<\/li>\n<\/ul>\n\n\n\n<p>In China, Zhipu announced a combined \u00a51.8 billion in investments from state-owned funds in Hangzhou, Zhuhai, and Chengdu, bringing its cumulative funding past \u00a510 billion and marking milestones in its corporate restructuring and preparation for a public listing.<\/p>\n\n\n\n<p>Meanwhile, in the infrastructure domain, companies in GPU compute leasing and AI chip development (such as Israel\u2019s Retym and Nexthop AI) have also received significant investments.<br>Turing\u2014providing programming data for OpenAI, Google, and others\u2014raised $111 million at a $2.2 billion valuation, with annual revenues of $167 million and proven profitability.<br>Scale AI is now pushing a secondary share sale valuing the company at $25 billion, an 80% increase from May last year, while also expanding into data collection services for humanoid robotics.<\/p>\n\n\n\n<p><strong>Humanoid Robotics Sees a Funding Surge:<\/strong><br>In China, projects like Zhiyuan Robotics, Tashi Intelligence, Qianxun Intelligence, and Vitadyn have all closed deals at the \u00a5100+ million level, with the highest valuation reaching \u00a515 billion.<br>Overseas, Agility Robotics raised $400 million, Dexterity secured $95 million, and Apptronik raised $350 million, later adding a $50 million extension to the round. 
SoftBank led a $130 million round for Terabase Energy, which uses robotics to build solar farms.<\/p>\n\n\n\n<p><strong>Application-Focused Funding:<\/strong><br>Large-scale funding is predominantly flowing into startups using large-model technology to transform vertical industries\u2014ranging from programming, healthcare, enterprise data services, financial fraud prevention, logistics, to drug discovery\u2014with at least 18 companies in this space.<br>Most of these companies were founded before ChatGPT\u2019s release, and have already accumulated stable customers and data resources. They aren\u2019t simply wrapping a large model\u2014they\u2019re striving to integrate large models with vertical scenarios, using AI to transform traditional processes and unlock new growth.<\/p>\n\n\n\n<p>Many investors view this direction as a prime entrepreneurial opportunity, noting that these areas demand long-term, deep accumulation to become competitive, and the potential revenue scale hasn\u2019t yet reached the level that attracts the giants. 
Several Silicon Valley investors told the media that they are now inundated with AI application startup proposals covering the entire industry.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Peering Into the \u201cBlack Box\u201d: The AI Microscope<\/h3>\n\n\n\n<p>Even though the output from large models can appear reasonable, the internal decision process is opaque<\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Agent Challenges &amp; In&#8230;<\/p>\n","protected":false},"author":2,"featured_media":6056,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"sfsi_plus_gutenberg_text_before_share":"","sfsi_plus_gutenberg_show_text_before_share":"","sfsi_plus_gutenberg_icon_type":"","sfsi_plus_gutenberg_icon_alignemt":"","sfsi_plus_gutenburg_max_per_row":"","footnotes":""},"categories":[27],"tags":[],"class_list":{"0":"post-6055","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai-power"},"_links":{"self":[{"href":"https:\/\/blog.nexussup.com\/index.php?rest_route=\/wp\/v2\/posts\/6055","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.nexussup.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.nexussup.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.nexussup.com\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.nexussup.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6055"}],"version-history":[{"count":1,"href":"https:\/\/blog.nexussup.com\/index.php?rest_route=\/wp\/v2\/posts\/6055\/revisions"}],"predecessor-version":[{"id":6059,"href":"https:\/\/blog.nexussup.com\/index.php?rest_route=\/wp\/v2\/posts\/6055\/revisions\/6059"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.nexussup.com\/index.php?rest_route=
\/wp\/v2\/media\/6056"}],"wp:attachment":[{"href":"https:\/\/blog.nexussup.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6055"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.nexussup.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6055"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.nexussup.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6055"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}