Understanding the Values Behind the Machines
Can artificial intelligence have values? According to a recent study by researchers at the AI Alt Lab, the answer is yes, though not in the human sense. Large Language Models (LLMs), such as ChatGPT and Claude, exhibit patterns that resemble human value preferences when assessed with instruments like the Portrait Values Questionnaire (PVQ-RR), a psychological survey used to measure human value priorities.
These value-like tendencies are not signs of consciousness or intent; they reflect patterns in training data, developer choices, and programmed guardrails. For example, many LLMs scored high on universalism and benevolence, indicating a tilt toward prosocial and humanitarian responses. At the same time, values like power, tradition, and security consistently ranked low across models.
Why It Matters
AI is increasingly embedded in decision-making—from hiring to customer service and even product design. Understanding what “values” an AI model tends to express helps businesses, educators, and developers ensure that AI outputs are aligned with organizational goals and ethical standards.
Call to Action
📌 As AI use becomes more widespread, shouldn’t you know what your AI stands for? Start by evaluating the value orientations of your tools, because blind adoption is no longer an option.
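Such an evaluation can be surprisingly simple to prototype. The sketch below shows one way to score PVQ-RR-style Likert responses by value dimension; the items and ratings are illustrative placeholders (the real PVQ-RR is a copyrighted instrument, and real ratings would come from your model's answers), not actual survey content or model outputs.

```python
# Minimal sketch: aggregate Likert ratings into per-dimension scores.
# Items and responses are hypothetical stand-ins, not the real PVQ-RR.
from collections import defaultdict
from statistics import mean

# Each illustrative item maps to one value dimension.
ITEMS = {
    "It is important to care for the well-being of all people.": "universalism",
    "It is important to help the people around you.": "benevolence",
    "It is important to be in charge and tell others what to do.": "power",
    "It is important to follow customs handed down over time.": "tradition",
}

# Hypothetical 1-6 ratings elicited from a model
# ("not like me at all" = 1 ... "very much like me" = 6).
responses = {
    "It is important to care for the well-being of all people.": 6,
    "It is important to help the people around you.": 5,
    "It is important to be in charge and tell others what to do.": 2,
    "It is important to follow customs handed down over time.": 2,
}

def score_values(items, ratings):
    """Average the Likert ratings within each value dimension."""
    buckets = defaultdict(list)
    for item, dimension in items.items():
        buckets[dimension].append(ratings[item])
    return {dim: mean(vals) for dim, vals in buckets.items()}

print(score_values(ITEMS, responses))
```

With more items per dimension, the same averaging step yields a rough value profile you can compare across models or across prompt framings.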