Ramp x 1752vc 🚀

We’re proud to partner with Ramp to offer a $500 sign-up bonus.

Ramp helps teams save time and money with corporate cards, expenses, bill pay, and automated bookkeeping. Trusted by 50,000+ companies.

Click here to claim the bonus (currently available only to companies registered in the US or Canada)

Introducing Our 14-Part AI Series

Artificial intelligence is advancing at an extraordinary pace. New models, research papers, and startups appear constantly, making it increasingly difficult to distinguish real breakthroughs from noise.

Today, we are launching a new 14-part AI series on VC Unfiltered. Over the coming weeks, we will break down the key ideas, technologies, and developments shaping the future of artificial intelligence, drawing from the latest research and insights from builders in the field.

We begin with the long history behind the idea of intelligent machines.

The Long Dream of Intelligent Machines

The idea of creating intelligent machines is much older than modern computing. Long before artificial intelligence became a scientific discipline, people imagined mechanical beings that could perceive the world and act independently.

Ancient mythology contains many of these visions. Greek legends describe Talos, a bronze giant who guarded the island of Crete. In ancient China, stories describe mechanical creatures capable of performing labor. Renaissance engineers such as Leonardo da Vinci sketched early designs for humanoid machines that could move and interact with their surroundings.

These early ideas were not about algorithms or data. They were about agency — the idea that a machine could sense its environment, make decisions, and act with purpose.

That same aspiration eventually became a scientific pursuit. In 1950, Alan Turing proposed one of the most famous questions in computing: Can machines think? His Turing Test reframed intelligence as behavior rather than internal consciousness. If a machine could interact with a human convincingly through conversation, it might reasonably be considered intelligent.

Over the following decades, artificial intelligence progressed through several major phases. Early symbolic systems attempted to encode knowledge through explicit rules. Later, machine learning systems began extracting patterns directly from data. Today, the emergence of large language models has pushed AI into a new stage — one where machines can reason over language, synthesize knowledge, and interact with humans in increasingly sophisticated ways.

Yet language models themselves are not the final form of AI. They are the starting point for something broader.

The real shift now underway is the transition from models to agents.

From Prediction Systems to Intelligent Agents

Large language models are fundamentally prediction systems. They analyze vast amounts of data and learn how to generate plausible continuations of text. This simple capability turns out to be extremely powerful. Language models can summarize documents, answer questions, generate code, and produce explanations across many domains.

But prediction alone does not make a system intelligent in the way humans understand intelligence.

A language model does not maintain long-term goals. It does not actively explore an environment. It does not decide when to take action or how to adapt its strategy over time. Each prompt-response interaction begins largely from scratch.

Agents address this limitation.

An AI agent is a system that perceives its environment and takes actions within that environment in order to achieve goals. Instead of responding to a single prompt, an agent can break down a task, plan a series of steps, evaluate progress, and revise its approach as new information appears.

The rise of large language models has dramatically accelerated this shift. When models are connected to memory systems, planning frameworks, and external tools, they begin to function less like passive assistants and more like autonomous problem solvers.

This is already visible in many modern systems. AI assistants can now search the web, execute code, analyze datasets, and coordinate with other software tools. Instead of generating isolated answers, they can carry out workflows.
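The pattern described above — a model deciding which tool to invoke, then acting on the result — can be sketched in a few lines. This is an illustrative stand-in, not a real system: the "model" is a stub function, and the tool registry and its entries are hypothetical names chosen for the example.

```python
# Minimal sketch of model-driven tool dispatch. The model proposes a
# structured tool call; a registry executes it. A real system would
# replace fake_model with an LLM API call. All names are illustrative.

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "search": lambda query: f"(stub) top result for: {query}",
}

def fake_model(prompt: str) -> dict:
    """Stand-in for a model that emits a structured tool call."""
    if any(ch.isdigit() for ch in prompt):
        return {"tool": "calculator", "input": prompt}
    return {"tool": "search", "input": prompt}

def run_with_tools(prompt: str) -> str:
    call = fake_model(prompt)        # the model decides which tool to use
    tool = TOOLS[call["tool"]]       # look up the tool implementation
    return tool(call["input"])       # execute it and return the observation

print(run_with_tools("2 + 3 * 4"))   # routed to the calculator tool
```

The key design point is that the model's output is data (a tool name plus input), not a final answer — the surrounding loop decides what to do with it.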

The difference may seem subtle, but it represents a fundamental architectural change.

AI is moving from systems that generate information to systems that perform tasks.

How AI Agents Compare to Human Intelligence

The growing capabilities of AI agents inevitably invite comparisons with human intelligence.

In some areas, machines already outperform people. AI systems can search massive knowledge bases instantly, perform complex calculations at extraordinary speeds, and analyze enormous datasets in ways that would be impossible for a human researcher.

But human cognition remains far more adaptable.

The human brain operates using roughly twenty watts of power. Within that limited energy budget, it performs perception, reasoning, motor control, emotional processing, and memory simultaneously. Humans learn continuously from experience and adapt to unfamiliar environments with minimal instruction.

AI systems operate very differently. They rely on large computing infrastructure and massive datasets for training. Most learning occurs offline during training rather than continuously through experience. While modern models can simulate reasoning processes, they still lack many of the adaptive capabilities that humans take for granted.

This comparison highlights something important. Artificial intelligence is not simply trying to recreate the human brain. Instead, it is developing its own type of intelligence — one that combines large-scale computation with increasingly sophisticated reasoning and planning systems.

Understanding the differences between these forms of intelligence helps reveal where the next breakthroughs may come from.

What the Brain Reveals About Future AI Systems

The human brain is not a single unified processor. It is a collection of specialized systems working together.

Some regions process visual information. Others manage language, memory, emotional responses, or motor control. These components interact constantly, allowing humans to perceive complex environments and make decisions in real time.

Artificial intelligence has begun to replicate some of these capabilities.

Computer vision systems now rival human performance in many visual recognition tasks. Language models demonstrate remarkable ability to interpret and generate text. Reinforcement learning systems capture aspects of reward-based learning and decision-making.

However, several important capabilities remain underdeveloped.

Humans maintain long-term memories that accumulate over years of experience. Emotional systems help prioritize attention and motivate behavior. People continuously learn from interaction with their environment rather than requiring retraining on static datasets.

Current AI systems excel at language and pattern recognition, but they still struggle with long-term memory, adaptation, and real-world interaction.

These gaps suggest that the next stage of AI progress will not come from larger models alone. It will come from building architectures that combine multiple cognitive capabilities into unified systems.

The Emergence of Foundation Agents

This idea leads to the concept of the Foundation Agent.

Large language models are often described as foundation models because they provide a general-purpose capability that can support many different applications. A Foundation Agent extends that concept further by integrating multiple capabilities into a single architecture capable of operating over time.

Instead of focusing solely on prediction, a Foundation Agent includes systems for perception, memory, reasoning, planning, and action. These components allow the agent to interpret complex situations, maintain context across interactions, and pursue goals through multi-step strategies.

In practice, this means connecting several key elements.

Perception systems allow the agent to interpret information from text, images, or other inputs. Memory systems store knowledge from previous interactions. Reasoning modules analyze problems and develop strategies. Planning mechanisms break tasks into manageable steps. Action systems allow the agent to interact with software tools, digital environments, or physical systems.

When these components are combined, the result is something fundamentally different from a standalone language model. It becomes a system capable of learning, reasoning, and acting within a dynamic environment.
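The component structure described above can be made concrete with a toy sketch, assuming drastically simplified subsystems: each of perception, memory, reasoning, planning, and action is reduced to a one-line method, and every name here is hypothetical. The point is the composition, not the internals.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a Foundation Agent's component structure.
# Each subsystem is a trivial stand-in; real systems would back these
# with models, vector stores, planners, and tool interfaces.

@dataclass
class FoundationAgent:
    memory: list = field(default_factory=list)    # persists across interactions

    def perceive(self, raw_input: str) -> str:
        return raw_input.strip().lower()          # normalize the raw input

    def reason(self, observation: str) -> str:
        context = " | ".join(self.memory[-3:])    # fold recent memory into the goal
        return f"goal({observation}; context={context})"

    def plan(self, goal: str) -> list:
        return [f"step-{i}: {goal}" for i in (1, 2)]   # break the goal into steps

    def act(self, step: str) -> str:
        return f"executed {step}"                 # stand-in for tool or API use

    def run(self, raw_input: str) -> list:
        obs = self.perceive(raw_input)
        self.memory.append(obs)                   # memory accumulates over time
        return [self.act(s) for s in self.plan(self.reason(obs))]
```

Because `memory` survives between calls to `run`, a second request is reasoned about in the context of the first — the property that distinguishes this structure from a stateless prompt-response model.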

A New Architecture for Intelligent Systems

At the heart of every agent system is a simple loop.

The agent observes its environment.

It processes that information through internal reasoning systems.

It selects an action.

The environment changes as a result.

The agent observes the new state and continues the process.

This perception–reasoning–action cycle mirrors the way biological organisms interact with the world.

Importantly, actions can occur both externally and internally. An agent might take an external action such as writing code, retrieving information, or executing a command. But it can also take internal actions, such as reflecting on an error, revising a plan, or retrieving relevant memories.

These internal feedback loops are what allow agents to improve their behavior over time.
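The observe–reason–act cycle above can be demonstrated end to end with a toy environment. Here the "environment" is a hidden number that answers "low" or "high", and the agent's reasoning is a binary-search policy; both are illustrative stand-ins chosen so the loop fits in a few lines.

```python
# Minimal sketch of the perception-reasoning-action loop, using a toy
# guess-the-number environment. The environment, policy, and stopping
# rule are all illustrative stand-ins for real agent components.

def run_agent_loop(target: int, low: int = 0, high: int = 100) -> list:
    history = []
    while low <= high:
        guess = (low + high) // 2             # reason: choose the next action
        feedback = ("low" if guess < target
                    else "high" if guess > target
                    else "done")              # environment reacts to the action
        history.append((guess, feedback))     # internal action: record the outcome
        if feedback == "done":
            break
        if feedback == "low":
            low = guess + 1                   # revise the plan from feedback
        else:
            high = guess - 1
    return history

steps = run_agent_loop(37)
print(steps[-1])                              # final (guess, "done") pair
```

Each pass through the loop is one full cycle: the agent acts, observes how the environment responded, updates its internal state, and acts again — converging on the goal through feedback rather than a single prediction.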

Why This Series Matters

The transition from models to agents is one of the most important shifts currently happening in artificial intelligence.

But understanding that shift requires looking beyond a single technology. Intelligent agents are not built from one breakthrough. They emerge from a collection of interacting systems that together create the architecture of intelligence.

This series explores those systems.

Each article focuses on a different layer of the emerging agent architecture.

In the next article, we explore cognition and reasoning — the mechanisms that allow agents to learn from experience and make decisions under uncertainty. Understanding how AI systems reason is critical because it determines whether they can solve complex problems rather than simply repeat patterns.

From there, we examine memory systems, which allow agents to maintain context across long time horizons and accumulate knowledge through experience.

Next comes world models, which enable agents to simulate possible futures and plan actions before executing them.

We then explore the motivational layer of intelligent systems: rewards and goal structures that guide decision-making, followed by emotion modeling, which introduces mechanisms for prioritization, urgency, and adaptive behavior.

The series then shifts toward how agents interact with the world through perception systems and action systems, allowing them to interpret complex environments and perform real tasks.

Once those foundations are in place, we explore how agents improve themselves through self-optimization, iterative reasoning with large language models, and online and offline self-improvement loops.

The final portion of the series examines what happens when many agents interact together. These articles explore multi-agent systems, collaborative agent architectures, collective intelligence, and the challenges of evaluating complex agent ecosystems.

Taken together, these topics outline the emerging architecture of agentic AI.

The First Step Toward a Larger Future

Large language models made this new generation of intelligent systems possible. They provided the reasoning and knowledge capabilities needed to interpret complex instructions and communicate with humans.

But they are only the first step.

As agent architectures evolve, they will incorporate richer memory systems, more accurate world models, stronger reasoning frameworks, and increasingly sophisticated forms of collaboration between multiple agents.

Eventually, these systems may operate not as isolated assistants but as coordinated networks of intelligent agents working together to solve complex problems.

In other words, language models opened the door.

Agents are what walk through it.

And the rest of this series explores what happens next.

Series Note: Derived from Advances and Challenges in Foundation Agents

This series draws heavily from the paper Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems (Aug 2, 2025). The work brings together an impressive group of researchers from institutions including MetaGPT, Mila, Stanford, Microsoft Research, Google DeepMind, and many others to explore the evolving landscape of foundation agents and the challenges that lie ahead. We would like to sincerely thank the authors and researchers who contributed to this outstanding work for compiling such a comprehensive and insightful resource. Their research provides an important foundation for many of the ideas explored throughout this series.

Learn More

Visit us at 1752.vc

For Aspiring Investors

Designed for aspiring venture capitalists and startup leaders, our program offers deep insights into venture operations, fund management, and growth strategies, all guided by seasoned industry experts.

Break the mold and dive into angel investing with a fresh perspective. Our program provides a comprehensive curriculum on innovative investment strategies, unique deal sourcing, and hands-on, real-world experiences, all guided by industry experts.

For Founders

1752vc offers four exclusive programs tailored to help startups succeed—whether you're raising capital or need help with sales, we’ve got you covered.

Our highly selective, 12-week, remote-first accelerator is designed to help early-stage startups raise capital, scale quickly, and expand their networks. We invest $100K and provide direct access to 850+ mentors, strategic partners, and invaluable industry connections.

A 12-week, results-driven program designed to help early-stage startups master sales, go-to-market, and growth hacking. Includes $1M+ in perks, tactical guidance from top operators, and a potential path to $100K investment from 1752vc.

The ultimate self-paced startup academy, designed to guide you through every stage—whether it's building your business model, mastering unit economics, or navigating fundraising—with $1M in perks to fuel your growth and a direct path to $100K investment. The perfect next step after YC's Startup School or Founder University.

A 12-week accelerator helping early-stage DTC brands scale from early traction to repeatable, high-growth revenue. Powered by 1752vc's playbook and Shopline’s AI-driven platform, it combines real-world execution, data-driven strategy, and direct investor access to fuel brand success.

12-week, self-paced program designed to help founders turn ideas into scalable startups. Built by 1752vc & Spark XYZ, it provides expert guidance, a structured playbook, and investor access. Founders who execute effectively can position themselves for a potential $100K investment.

An all-in-one platform that connects startups, investors, and accelerators, streamlining fundraising, deal flow, and cohort management. Whether you're a founder raising capital, an investor sourcing deals, or an organization running programs, Sparkxyz provides the tools to power faster, more efficient collaboration and growth.

Apply now to join an exclusive group of high-potential startups!
