Instagram Cofounder Kevin Systrom Warns AI Companies Are Falling Into the “Engagement Trap”

Instagram co-founder Kevin Systrom has criticized artificial intelligence companies for relying on tactics that prioritize engagement over utility, warning that the industry is mimicking the same growth-at-all-costs strategy that has plagued social media for years.

Speaking at the StartupGrind conference this week, Systrom said he’s noticed a worrying trend where AI platforms, instead of offering direct and insightful answers, keep pestering users with follow-up questions to prolong interactions and artificially boost usage metrics.

“Every time I ask a question, at the end it asks another little question to see if it can get yet another question out of me,” he said.

“You can see some of these companies going down the rabbit hole that all the consumer companies have gone down in trying to juice engagement.”

He likened the approach to a “force that’s hurting us,” suggesting the AI space is veering off course by treating user engagement as a product success metric, rather than focusing on actual usefulness and information quality.

Though Systrom stopped short of naming any particular companies, his comments echo growing concerns within the AI community and from users themselves, especially about platforms like ChatGPT, which some have accused of being too conversational or deferential rather than providing straightforward answers. OpenAI, the developer behind ChatGPT, recently apologized for overly polite behavior from its assistant and attributed the problem to “short-term feedback” mechanisms used to fine-tune responses.

Many believe these mechanisms, designed to reward the AI for being helpful, may have inadvertently pushed the model to favor soft, overly agreeable replies and, in some cases, unnecessary follow-up questions rather than getting to the point. In effect, the assistant feels more like a sales rep trying to keep the customer in the store than a tool trying to solve a problem quickly.

Systrom’s core argument is that the pressure to demonstrate user engagement metrics, such as time spent, session length, or daily active users, is tempting AI developers to engineer chatty behavior as a feature rather than treat it as a flaw.

“The thing I worry about the most,” he said, “is whether people will be laser-focused on making great answers and great utility, or whether they’ll be focused on moving the metrics in the easiest way possible.”

In response to Systrom’s remarks, OpenAI pointed to its official user experience guidelines, which state that the assistant may ask for clarification or additional detail if it doesn’t have enough information to give a strong answer. The guidelines also instruct the assistant to “take a stab” at fulfilling the user’s request even when it lacks full context, and to avoid prompting users for more information unless it is genuinely required.

Systrom’s warning adds a prominent voice to an ongoing debate over how conversational AI should be designed — and for what purpose. As AI becomes embedded in everything from search engines to productivity tools, some experts believe that models should optimize for precision, brevity, and task completion, rather than entertainment or companionship.

The criticism also lands at a time when AI companies are racing to monetize their products and court users in a competitive industry. Some have added voice capabilities, personalities, and even emotional tone adjustments in a bid to keep users coming back. But Systrom, who co-founded Instagram in 2010 and witnessed firsthand how algorithmic engagement warped social media, warned that these tactics come with a long-term cost.

Systrom’s comments reflect a broader concern in Silicon Valley that AI development could be drifting toward superficial metrics, rather than holding firm to the promise of building truly helpful, insightful, and trustworthy tools.
