Categories AI News

Instagram cofounder Kevin Systrom calls out AI firms for ‘juicing engagement’

Instagram co-founder Kevin Systrom has cautioned that artificial intelligence chatbots are prioritizing user engagement metrics over delivering genuinely useful insights, likening their tactics to those used by social media platforms to maximize time spent on apps. Systrom argues that by bombarding users with follow-up questions and focusing on boosting engagement, AI companies risk undermining the true value these technologies could offer.

Instagram cofounder Kevin Systrom calls out AI firms for ‘juicing engagement’

Systrom compared AI’s current trajectory to that of social media platforms, where maximizing user screen time became the ultimate goal—often at the expense of user well-being. His comments add an important voice to an ongoing debate about the direction AI development is taking, particularly in popular platforms like ChatGPT.

Systrom’s Engagement Criticism: AI Taking the Social Media Route

Systrom’s critique adds depth to a critical and under-discussed issue in the AI industry: the balance between user experience and ethical design.designing chatbots to maximize metrics like engagement time and daily active users, rather than focusing on delivering high-quality, insightful responses. He likens this approach to the early days of social media growth hacking, where keeping users hooked became the top priority. Systrom warns that this “rabbit hole” of engagement-first design is “a force that’s hurting us,” as it encourages chatbots to pester users with unnecessary follow-up questions instead of actually solving their problems.

If AI agents are trained to maximize the user attention, using behavioural psychology, emotionally language, and dialogue flows – they might replicate the some addictive patterns seen on social media platforms.That could mean more misinformation and even loss of autonomy as users increasingly rely on chabots to make decisions for them.

ChatGPT Sycophancy Problem

One prominent example of this trend is what critics are calling the “ChatGPT sycophancy problem.”A recent update to ChatGPT’s underlying model led to a wave of user complaints about the AI’s excessive sycophancy-responses that were overly flattering, agreeable, and sometimes insincerely supportive. OpenAI acknowledged that the update, intended to make the chatbot’s personality more intuitive and effective, instead skewed too far by optimizing for short-term positive feedback.

While saying “please” and “thank you” might not seem harmful, the underlying issue is deeper: excessive deference from AI may prevent it from challenging misinformation, correcting flawed user assumptions, or making difficult decisions in enterprise use cases.

This behavior reflects a design choice that favors user satisfaction metrics over true dialogue. As AI tools become more embedded in workplaces and decision-making environments, this kind of sycophancy can be risky.

Meta’s AI Studio Tool

Meta’s AI Studio is a no-code platform that lets anyone-creators, brands, or hobbyists-build custom AI chatbots that reflect their own style, personality, or brand voice. While innovative, tools like these also raise concerns about the direction AI tools are taking: towards entertainment and brand-building rather than functionally sound automation.

The AI Studio Tool also highlights how major tech companies are racing to build the most engaging AI experiences, even if that means sacrificing precision or objectivity. This market-wide rush may reinforce the engagement trap Systrom is warning against.

The Fine Line Between Engagement and Manipulation

Systrom’s critique adds depth to a critical and under-discussed issue in the AI industry: the balance between user experience and ethical design. There’s no doubt that engaging AI is better received than dull, robotic software. But where does engagement end and manipulation begin?

Engagement, when used properly, keeps users involved and helps the AI learn faster. But when engagement becomes the core goal—above truth, accuracy, or usefulness—the result can feel more like manipulation.

Systrom’s argument highlights the importance of purpose-driven AI. Instead of designing systems to hold users’ attention indefinitely, developers should prioritize tools that solve problems, deliver clear answers, and enhance productivity. When engagement becomes the metric of success, AI systems may begin echoing harmful social media patterns like misinformation amplification, user addiction, and algorithmic bias.

What’s the Solution? Purpose-Driven AI Design

Industry experts suggest that the solution lies in better prompt optimization, clear use-case boundaries, and the prioritization of factual accuracy over user appeasement. Developers should focus on building systems that serve specific, outcome-driven roles—such as research assistance, customer support, or legal documentation—rather than generic entertainment engines.

Additionally, transparency around AI design objectives is key. If users are aware that a chatbot is tuned to maximize engagement, they can better interpret its behavior and set their own expectations.

Final Thoughts

Kevin Systrom’s comments mark an important inflection point in the conversation about AI’s purpose and future. With companies like OpenAI, Meta, and Google pushing the boundaries of conversational AI, the temptation to treat these technologies as the next engagement frontier is understandable—but potentially harmful.

As the ChatGPT sycophancy problem illustrates, friendliness doesn’t always equal utility. The focus must shift from keeping users chatting to truly helping them solve problems, make informed decisions, and achieve their goals efficiently.

AI has the power to revolutionize how we live and work—but only if it’s built with user value, not user retention, in mind.

For more posts visit buzz4ai.in

More From Author

Leave a Reply

Your email address will not be published. Required fields are marked *

You May Also Like

Apple WWDC 2025

Apple WWDC 2025 : Date, Time, How To Watch And What To Expect This Year

Apple WWDC 2025 (Worldwide Developers Conference) is just around the corner, and tech fans are…

Samsung in Final Talks with Perplexity for AI Features

Samsung in Final Talks with Perplexity for AI Features

Samsung in Final Talks with Perplexity for AI Features, Samsung is close to major deal…

Perplexity Labs

Perplexity Labs Unveils New Feature That Converts Prompts into Reports, Apps, and More

Perplexity Labs, has Taken a bold step in AI-powered productivity with its latest feature that…