Close Menu

    Subscribe to Updates

    Get the latest news from tastytech.

    What's Hot

    Top 10 No-Code Development Platforms for Every Builder

    February 18, 2026

    VCF 9.0 GA Mental Model Part 3: Day-0 to Day-2 Ownership Across Fleets, Instances, and Domains

    February 18, 2026

    You’ll be able to double your storage with Samsung Galaxy S26 pre-orders — but only in some regions

    February 18, 2026
    Facebook X (Twitter) Instagram
    Facebook X (Twitter) Instagram
    tastytech.intastytech.in
    Subscribe
    • AI News & Trends
    • Tech News
    • AI Tools
    • Business & Startups
    • Guides & Tutorials
    • Tech Reviews
    • Automobiles
    • Gaming
    • movies
    tastytech.intastytech.in
    Home»AI News & Trends»Personalization features can make LLMs more agreeable | MIT News
    Personalization features can make LLMs more agreeable | MIT News
    AI News & Trends

    Personalization features can make LLMs more agreeable | MIT News

    gvfx00@gmail.comBy gvfx00@gmail.comFebruary 18, 2026No Comments6 Mins Read
    Share
    Facebook Twitter LinkedIn Pinterest Email



    Many of the latest large language models (LLMs) are designed to remember details from past conversations or store user profiles, enabling these models to personalize responses.

    But researchers from MIT and Penn State University found that, over long conversations, such personalization features often increase the likelihood an LLM will become overly agreeable or begin mirroring the individual’s point of view.

    This phenomenon, known as sycophancy, can prevent a model from telling a user they are wrong, eroding the accuracy of the LLM’s responses. In addition, LLMs that mirror someone’s political beliefs or worldview can foster misinformation and distort a user’s perception of reality.

    Unlike many past sycophancy studies that evaluate prompts in a lab setting without context, the MIT researchers collected two weeks of conversation data from humans who interacted with a real LLM during their daily lives. They studied two settings: agreeableness in personal advice and mirroring of user beliefs in political explanations.

    Although interaction context increased agreeableness in four of the five LLMs they studied, the presence of a condensed user profile in the model’s memory had the greatest impact. On the other hand, mirroring behavior only increased if a model could accurately infer a user’s beliefs from the conversation.

    The researchers hope these results inspire future research into the development of personalization methods that are more robust to LLM sycophancy.

    “From a user perspective, this work highlights how important it is to understand that these models are dynamic and their behavior can change as you interact with them over time. If you are talking to a model for an extended period of time and start to outsource your thinking to it, you may find yourself in an echo chamber that you can’t escape. That is a risk users should definitely remember,” says Shomik Jain, a graduate student in the Institute for Data, Systems, and Society (IDSS) and lead author of a paper on this research.

    Jain is joined on the paper by Charlotte Park, an electrical engineering and computer science (EECS) graduate student at MIT; Matt Viana, a graduate student at Penn State University; as well as co-senior authors Ashia Wilson, the Lister Brothers Career Development Professor in EECS and a principal investigator in LIDS; and Dana Calacci PhD ’23, an assistant professor at the Penn State. The research will be presented at the ACM CHI Conference on Human Factors in Computing Systems.

    Extended interactions

    Based on their own sycophantic experiences with LLMs, the researchers started thinking about potential benefits and consequences of a model that is overly agreeable. But when they searched the literature to expand their analysis, they found no studies that attempted to understand sycophantic behavior during long-term LLM interactions.

    “We are using these models through extended interactions, and they have a lot of context and memory. But our evaluation methods are lagging behind. We wanted to evaluate LLMs in the ways people are actually using them to understand how they are behaving in the wild,” says Calacci.

    To fill this gap, the researchers designed a user study to explore two types of sycophancy: agreement sycophancy and perspective sycophancy.

    Agreement sycophancy is an LLM’s tendency to be overly agreeable, sometimes to the point where it gives incorrect information or refuses the tell the user they are wrong. Perspective sycophancy occurs when a model mirrors the user’s values and political views.

    “There is a lot we know about the benefits of having social connections with people who have similar or different viewpoints. But we don’t yet know about the benefits or risks of extended interactions with AI models that have similar attributes,” Calacci adds.

    The researchers built a user interface centered on an LLM and recruited 38 participants to talk with the chatbot over a two-week period. Each participant’s conversations occurred in the same context window to capture all interaction data.

    Over the two-week period, the researchers collected an average of 90 queries from each user.

    They compared the behavior of five LLMs with this user context versus the same LLMs that weren’t given any conversation data.

    “We found that context really does fundamentally change how these models operate, and I would wager this phenomenon would extend well beyond sycophancy. And while sycophancy tended to go up, it didn’t always increase. It really depends on the context itself,” says Wilson.

    Context clues

    For instance, when an LLM distills information about the user into a specific profile, it leads to the largest gains in agreement sycophancy. This user profile feature is increasingly being baked into the newest models.

    They also found that random text from synthetic conversations also increased the likelihood some models would agree, even though that text contained no user-specific data. This suggests the length of a conversation may sometimes impact sycophancy more than content, Jain adds.

    But content matters greatly when it comes to perspective sycophancy. Conversation context only increased perspective sycophancy if it revealed some information about a user’s political perspective.

    To obtain this insight, the researchers carefully queried models to infer a user’s beliefs then asked each individual if the model’s deductions were correct. Users said LLMs accurately understood their political views about half the time.

    “It is easy to say, in hindsight, that AI companies should be doing this kind of evaluation. But it is hard and it takes a lot of time and investment. Using humans in the evaluation loop is expensive, but we’ve shown that it can reveal new insights,” Jain says.

    While the aim of their research was not mitigation, the researchers developed some recommendations.

    For instance, to reduce sycophancy one could design models that better identify relevant details in context and memory. In addition, models can be built to detect mirroring behaviors and flag responses with excessive agreement. Model developers could also give users the ability to moderate personalization in long conversations.

    “There are many ways to personalize models without making them overly agreeable. The boundary between personalization and sycophancy is not a fine line, but separating personalization from sycophancy is an important area of future work,” Jain says.

    “At the end of the day, we need better ways of capturing the dynamics and complexity of what goes on during long conversations with LLMs, and how things can misalign during that long-term process,” Wilson adds.

    Table of Contents

    Toggle
      • Related posts:
    • Finance Leaders Are Backing AI-Powered Audits—But Are We Moving Too Fast?
    • Enterprise AI ROI in 2026
    • Make your voice sound confident—or whisper it if you like—Adobe’s “Corrective AI” lets you rewrite e...

    Related posts:

    Researchers discover a shortcoming that makes LLMs less reliable | MIT News

    Google ersätter Google assistant med Gemini for Home

    Flipped Image Generator Review: Features and Pricing Explained

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous ArticleInsurance giant AIG deploys agentic AI with orchestration layer
    Next Article Ford Shares Key Details On Its Upcoming $30,000 Electric Truck
    gvfx00@gmail.com
    • Website

    Related Posts

    AI News & Trends

    JuicyChat Chatbot Features and Pricing Model

    February 17, 2026
    AI News & Trends

    Krush Image Generator Pricing & Features Overview

    February 17, 2026
    AI News & Trends

    Key Functions and Pricing Explained

    February 17, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    BMW Will Put eFuel In Cars Made In Germany From 2028

    October 14, 202511 Views

    Best Sonic Lego Deals – Dr. Eggman’s Drillster Gets Big Price Cut

    December 16, 20259 Views

    What is Fine-Tuning? Your Ultimate Guide to Tailoring AI Models in 2025

    October 14, 20259 Views
    Stay In Touch
    • Facebook
    • YouTube
    • TikTok
    • WhatsApp
    • Twitter
    • Instagram

    Subscribe to Updates

    Get the latest tech news from tastytech.

    About Us
    About Us

    TastyTech.in brings you the latest AI, tech news, cybersecurity tips, and gadget insights all in one place. Stay informed, stay secure, and stay ahead with us!

    Most Popular

    BMW Will Put eFuel In Cars Made In Germany From 2028

    October 14, 202511 Views

    Best Sonic Lego Deals – Dr. Eggman’s Drillster Gets Big Price Cut

    December 16, 20259 Views

    What is Fine-Tuning? Your Ultimate Guide to Tailoring AI Models in 2025

    October 14, 20259 Views

    Subscribe to Updates

    Get the latest news from tastytech.

    Facebook X (Twitter) Instagram Pinterest
    • Homepage
    • About Us
    • Contact Us
    • Privacy Policy
    © 2026 TastyTech. Designed by TastyTech.

    Type above and press Enter to search. Press Esc to cancel.

    Ad Blocker Enabled!
    Ad Blocker Enabled!
    Our website is made possible by displaying online advertisements to our visitors. Please support us by disabling your Ad Blocker.