HIGHLIGHTS

AI and jobs: measurable effects remain narrow. Stanford Digital Economy Lab reviews current evidence and finds limited aggregate labor-market impact so far, with clearer pressure on entry-level hiring in AI-exposed roles and persistent data gaps on adoption. CxOs should steer skills and workforce planning with granular metrics rather than broad displacement narratives. Stanford Digital Economy Lab

Production GenAI delivers when tied to workflows. Google Cloud compiles hundreds of live enterprise use cases, reporting material cycle-time, accuracy and cost improvements when models are embedded in data-platform workflows rather than run as standalone pilots. The pattern supports funding integration, guardrails and change management over model novelty. Google Cloud

Capital flows to “AI scientists” and autonomous labs. a16z leads a ~$300m round for Periodic Labs to pair AI hypothesis engines with automated experimentation, starting in semiconductor materials. R&D leaders should expect shorter discovery cycles and new build-versus-partner decisions at the interface of software and wet/physics labs. LinkedIn

OTHER NEWS

  • Consulting models shift toward “service-as-software”. IBM and peers emphasize agentic tools and software-powered delivery, changing how buyers evaluate value, pricing and risk in advisory work. LinkedIn
  • US–UK MoU spotlights AI standards and R&D alignment. The Technology Prosperity deal calls out collaboration on AI for science and standards (AISI/CAISI), relevant to cross-border compliance and procurement. GOV.UK
  • Private capital reweights toward AI infrastructure. Blackstone flags underappreciated AI disruption risks for some service categories, while leaning into data-center-adjacent assets. PYMNTS

HIGHLIGHTS

Plan for discontinuities, not linear progress. Julian Schrittwieser argues models are beginning to tackle multi-hour tasks and approach expert-level performance across occupations, implying step-changes in 2026–27. Strategy and automation roadmaps should incorporate faster capability thresholds and dependency risks. julian.ac

Small, recursive models show strong reasoning on ARC-AGI. An arXiv paper reports a ~7M-parameter recursive architecture competitive on reasoning benchmarks, hinting that algorithmic design can offset sheer scale for some tasks. This matters for edge deployments and total cost of ownership. arxiv.org

“Twin” mega-study: useful simulators, not substitutes. New benchmarking finds LLM-based personal twins achieve decent accuracy but under-capture real human variation, working better for some groups and domains than others. Treat them as relative simulators for research or UX testing, not one-to-one stand-ins. LinkedIn

OTHER NEWS

  • Model self-monitoring shows early promise and limits. Anthropic researchers probe internal “concept activations,” with models sometimes detecting and describing injected signals, underscoring both potential and governance gaps. VentureBeat
  • Who gains from AI? An MIT Sloan explainer distills “Power and Progress” for board-level conversations on ensuring worker benefits alongside efficiency gains. MIT Sloan
  • Speculative input design: OCR-first models. Andrej Karpathy suggests treating text as pixels to enable layout-aware inputs and new attention schemes; executives should watch for I/O shifts that affect document workflows. X

HIGHLIGHTS

Data Quality for AI – Nov 19th workshops. We are running 90-minute, small-group sessions that turn data quality from a blocker into an advantage, focused on practical fixes that unlock AI value. Registration details are available on our site. noesysai.com

On stage at the Private Equity Exchange, Paris, 25 Nov. We will join Day 1 discussions on value creation and AI’s role in portfolio performance. PEX event
