
Product Development & Innovation

“Building a great product is a creative, chaotic process which you won’t get right every time, so you have to also be learning from success and failure.”
Gibson Biddle, former VP of Product at Netflix

Product Development refers to the dynamic process of creating, designing, and delivering products or solutions that address user needs while ensuring a seamless and relevant user experience. It is a balance between market insights, efficient execution, and technological innovation. Therefore, the main objectives of product development are to solve real customer problems, deliver customer value, and achieve product-market fit. Marty Cagan highlights four critical characteristics of a product: it must be “valuable”, “feasible”, “usable”, and “viable”.

Development


Jobs to Be Done & Product-Market Fit


Technological Innovation


Product Design & User Experience

This step focuses on creating a product that meets customer expectations and aligns with the company’s and market’s goals. It’s about ensuring that the product is “viable”, meaning it addresses real customer needs and finds its place in the market.

Here, the focus is on leveraging new technologies to enhance the product’s capabilities and competitiveness. This step addresses the use of emerging technologies in ways that provide a unique advantage or solve customer problems more effectively.

This stage emphasizes making the product “usable,” ensuring that the user experience is intuitive, easy, and enjoyable. It involves understanding how users interact with the product and designing interfaces and features that cater to those needs.


Product Delivery

The “feasibility” aspect of product development is addressed here. It involves ensuring that the product can be delivered within the constraints of time, budget, and technical resources, while meeting quality standards.


Willingness to Pay

This step completes product “viability” by aligning the product’s value with its pricing strategy and business model. It involves understanding what customers are willing to pay and determining the right pricing structure to maximize revenue while ensuring customer satisfaction and business profitability.
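The trade-off between price and the number of customers willing to buy can be made concrete with a small sketch. Assuming we have surveyed each customer’s maximum willingness to pay (the function names and data below are purely illustrative), each candidate price can be scored by the simple revenue it would yield:

```python
def revenue_at(price, willingness_to_pay):
    """Revenue if every customer whose WTP >= price buys at that price."""
    buyers = sum(1 for wtp in willingness_to_pay if wtp >= price)
    return price * buyers

def best_price(candidate_prices, willingness_to_pay):
    """Candidate price that maximizes simple revenue (price x buyers)."""
    return max(candidate_prices, key=lambda p: revenue_at(p, willingness_to_pay))

# Surveyed maximum prices four customers would pay (hypothetical data)
wtp = [5, 10, 10, 20]
print(best_price([5, 10, 20], wtp))  # 10: revenue 30 beats 20 (at $5) and 20 (at $20)
```

In practice, pricing also has to account for costs, segmentation, and how willingness to pay shifts over time; this sketch only illustrates the core revenue trade-off the paragraph describes.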


Creating Products by Addressing Jobs to Be Done and Achieving Product-Market Fit

“If you understand the job, how to improve it becomes obvious.”
Clayton Christensen

The ultimate goal of any product is to gain its place in the market, meaning creating a product that customers want and love in a competitive landscape, while also meeting the business goals of the company. This is commonly referred to as achieving product-market fit. Product-market fit is not just about having a good product; it also emphasizes the importance of the product's alignment with a relevant and thriving market. Product-Market Fit was introduced in 2007 by Marc Andreessen, a venture capitalist and co-founder of Netscape, as a concept where the product resonates with the market and successfully meets the strong demand from customers.


Product-Market Fit (PMF) is a broader, holistic approach that requires working within a strategic context, ensuring that everything has been analyzed and aligned in a way that resonates with the market's needs and desires. In addition to Product-Market Fit (PMF), we also use the Jobs to Be Done (JTBD) concept, which, while seemingly similar, focuses more specifically on understanding which problem to solve and why. Although JTBD was initially used to describe the creation of innovative products, it is a highly valuable approach for any new product development.

Popularized in 2003 by Clayton Christensen, a professor at Harvard Business School and renowned for his work on disruptive innovation (as outlined in his book "The Innovator's Solution"), Jobs to Be Done (JTBD) emphasizes that products should be designed to help customers complete specific tasks or jobs. One of the most famous quotes that captures the essence of JTBD comes from Harvard Business School Professor Theodore Levitt, who said, “People don’t want to buy a quarter-inch drill. They want a quarter-inch hole!” This perfectly summarizes the JTBD concept: customers don’t necessarily want the product itself; they want the solution it provides.


Many companies mistakenly think that customers want their product - like the drill - when, in fact, they need the solution it delivers - the hole. Jobs to Be Done (JTBD) invites us to focus not just on the customer or the product, but primarily on the customer's needs - specifically, what your customer wants and why they want it. This shift in focus helps product teams understand the underlying motivations driving customer behaviour, rather than merely categorizing them by demographics or product features. Understanding the "why" behind customer needs allows for more meaningful innovation and ensures that the product addresses the core issue at hand.

Such an approach allows us to see how products evolve from one solution to another, like shifting from buying a car to using car-sharing services — because both fulfill the same need: transportation. This perspective helps us understand that customers don’t always care about the specific product; they care about the job it helps them accomplish.

In the same way that the most relevant segmentation focuses on how customers use a product rather than who they are, the best way to view your product is not as a set of features but as the solution it provides. Customers "hire" products to do specific jobs, and this job-defined market is much broader than a product-category-defined market. Christensen views the product as a three-dimensional whole with social, functional, and emotional aspects, which product teams must understand and translate into a clear product vision before designing their product.


When JTBD is clearly defined, the product purpose transcends functionality alone and transforms into a meaningful “purpose brand”. Think about companies like Google, Microsoft, and YouTube - brands that are more than just products; they represent something significant to customers. As Christensen puts it, “To build brands that mean something to customers, you need to attach them to products that mean something to customers.”

This is why it’s so crucial to define the Job to Be Done before diving into designing products and user experiences. By understanding the job first, you ensure that the product not only meets customer needs but resonates on a deeper level, fostering stronger connections.

Common Pitfalls and Their Associated Cognitive Biases

1. Overlooking PMF

Overconfidence Bias - The tendency to overestimate one’s abilities or accuracy, leading to overly optimistic judgments and flawed decision-making.

Group Attribution Error - Assuming that the characteristics or actions of individuals within a group represent the entire group, or vice versa.

Stereotyping Bias - Generalizing characteristics, behaviors, or traits to individuals based on their membership in a particular group.

Self-Consistency Bias - The tendency to perceive one’s past attitudes and behaviors as consistent with current ones, even if they have changed.

Confirmation Bias - People seek information that confirms their existing beliefs and ignore conflicting evidence.

Groupthink - A psychological phenomenon in which people strive to maintain cohesion and reach consensus within a group.

Authority Bias - Overvaluing opinions or decisions of authority figures, often at the expense of personal judgment or evidence.

2. Forget the User, Love the Product

Bike-shedding effect (Parkinson’s Law of Triviality) - Spending disproportionate time on trivial issues while neglecting more complex or significant ones, often because simpler topics feel more accessible.

Semmelweis Reflex - Rejecting new evidence or ideas because they challenge established norms or beliefs.

3. Overengineer the product

Projection Bias - Assuming that one’s current preferences or emotions will remain consistent in the future.

Choice Overload Bias - Also known as overchoice, choice paralysis, or the paradox of choice: the tendency to feel overwhelmed when presented with too many options.

1. Overlooking PMF

The first category arises when the product team ignores or misunderstands the Product-Market Fit (PMF) concept and believes their product already meets PMF, leading them to skip the necessary analysis and focus solely on product development. This misperception usually occurs because there is no universal understanding of what Product-Market Fit (PMF) truly means, and there is no dedicated tool to measure it. While metrics such as market share, customer retention, customer engagement, or NPS can serve as useful indicators, these alone may not be sufficient. The key distinction lies in whether you're using vanity metrics - which look impressive on the surface but don’t necessarily provide meaningful insights - or true performance metrics that align with the defined product vision and strategy.
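As a concrete illustration of one such indicator, Net Promoter Score is computed from 0-10 survey responses: the share of promoters (scores of 9-10) minus the share of detractors (scores of 0-6). A minimal sketch of the standard formula (function name and data are illustrative):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores), 1)

# Example: 3 promoters, 1 passive, 1 detractor out of 5 responses
print(nps([10, 10, 9, 8, 0]))  # 40.0
```

As the paragraph above stresses, a single number like this can easily become a vanity metric; it only supports a PMF assessment when tracked over time alongside retention and engagement data tied to the product vision.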


The main bias here is Overconfidence Bias, which occurs when individuals overestimate their knowledge, abilities, or the accuracy of their predictions. This can lead product teams to be overconfident in their understanding of the market or customers, assuming their product is already a good fit without validating it through proper customer research or market testing. As a result, they may overlook opportunities to ensure sustainable growth and fail to recognize when it's time to pivot.

 

Another bias contributing to this situation is Group Attribution Error, where one assumes that the characteristics of one person within a group are representative of the entire group. In this case, the product team might rely on feedback from one or a few customers, assuming that all customers share the same needs. This misperception could stem from Stereotyping Bias, which involves making generalized assumptions about a group of people based on limited information. For example, not all millennials or seniors have the same aspirations, especially considering their differing financial situations or dependency statuses, which can evolve over time.

Another bias that reinforces this situation is Self-Consistency Bias, which arises during the decision-making process and keeps our perceptions aligned with the judgments we've made in the past to maintain consistency with ourselves. Based on an initial product vision, the product team may continue prioritizing directions that were originally deemed important, and due to Confirmation Bias, they may seek out evidence that supports these assumptions, disregarding new data that challenges them.

Two other biases related to internal relationships between teams lead to deprioritizing or skipping the PMF approach: Groupthink and Authority Bias. Groupthink is a psychological phenomenon where people strive to maintain group cohesion by reaching a consensus, while Authority Bias occurs when one overvalues the opinion or decision of authority figures.

For product teams that have a tight relationship with many internal stakeholders, the relationship with the Sales team can be the most challenging, especially in B2B companies that sell to large enterprise clients, which generate significant revenue. In the case of Groupthink, the product team may seek to reach consensus with the sales team to maintain a good relationship, potentially prioritizing harmony over objective decision-making. On the other hand, Authority Bias can occur when the sales team, due to their direct access to clients, exerts pressure on the product team, asserting that they have a better understanding of the market and customer demands.


When the product team doesn’t actively engage with customers to gather insights, they may accept the sales team’s perspective, which could lead to a situation where the product team simply defers to sales requests. This creates an unhealthy dynamic where the product team becomes too reliant on sales feedback, ultimately transforming into a project team that is driven by salespeople’s requests to develop features for specific clients. This approach can detract from the broader product vision and PMF goals.

Instead of focusing on the long-term, scalable product-market fit, this dynamic risks the product becoming overly customized for individual clients, which can reduce its overall value in the broader market. The reliance on sales-driven features rather than a customer-driven approach may prevent the product from truly aligning with the wider market's needs, hindering the achievement of PMF.

When Product-Market Fit (PMF) is not a product priority, the company may ask the marketing team to work downstream on the product’s value proposition and marketing materials, presenting a supposed relevance of the product development approach. In this scenario, the marketing team may leverage Framing Bias when crafting the narrative around the product. Framing Bias occurs when one’s judgment or decision is influenced by how information is presented rather than the actual content.

For example, marketing may highlight certain features or benefits of the product in a way that makes it appear more aligned with customer needs than it truly is. While this can drive initial interest, it can lead to a disconnect when customers engage with the product and find that it doesn’t meet the expectations set by the marketing materials.

It’s important to note that presenting customer information or product benefits in a misleading or unethical way can damage the trust relationship with customers and negatively impact the company’s reputation. If customers feel misled by the marketing story, it can lead to increased churn and a lack of loyalty, undermining long-term success.

Therefore, it is critical that the product and marketing teams remain aligned on honesty and transparency, ensuring that both the product’s true value and its marketing message are consistently communicated. This alignment fosters credibility and helps build a strong, trust-based relationship with customers, ultimately supporting the long-term goal of achieving Product-Market Fit.

Use Case - Quibi

In April 2020, Quibi, a new streaming entertainment platform, was launched by Hollywood mogul Jeffrey Katzenberg and tech expert Meg Whitman, former eBay CEO. This initiative raised nearly $2 billion in funding, backed by major investors like Disney, Comcast’s NBCUniversal, and AT&T’s WarnerMedia. Quibi focused on short-form videos, around 10 minutes, designed for mobile-first viewing, aimed at young people on the go. Priced at $4.99 per month, Quibi was projected to reach 7 million subscribers. However, after its launch, it only managed to acquire around 500,000 subscribers. Despite the involvement of big names, large funding, and high-quality content, Quibi shut down just six months after its debut.


Although several reasons were suggested for its failure, such as poor timing (especially with the onset of the COVID-19 pandemic, when people stayed home and weren’t on the go), the core issue was the inability to identify the true customer needs required to achieve Product-Market Fit (PMF). The company identified short-form videos as an unmet need and assumed that there was a market gap for them, probably thinking of Netflix as a competitor. However, Quibi overlooked the fact that platforms like Instagram, YouTube, and TikTok were already offering short-form video content that met customer needs in a more engaging and accessible way. Quibi's assumption about customer demand was not grounded in robust customer research or a thorough understanding of viewing habits.

 

The key issue was that it wasn’t clear enough what Quibi brought to the table that didn't already exist on other platforms. Overconfidence in their own concept led Quibi to focus on producing high-quality content from the outset, whereas other streaming services like Netflix initially relied on existing content before moving to original productions after they had built up a customer base. This decision led Quibi to start with a limited variety of content at high production costs, which proved unsustainable.

 

The lesson from Quibi’s failure is clear: before heavily investing in a product, it’s crucial to validate the customer problem you are aiming to solve. The company’s inability to fully understand customer preferences and how existing platforms were already meeting those needs ultimately led to their downfall.


Overcoming Cognitive Biases While Creating a Product

  • Validating Market Relevance

Is there a market for my product, and does it meet a real customer need?

Ensuring the team is not overly attached to their ideas and that the product fulfills a meaningful need in the market.


  • Prioritizing Customer-Centric Decision-Making

In our decision-making, are we focusing on the customer problem, or are we overly focused on the product and our internal attachments to it?

We must ensure that decisions are driven by customer feedback and market demand, not just internal preferences.


  • Aligning and Reassessing Product-Market Fit

Are we aligned with the customer about the PMF of our product?

Ensure that we have gathered customer insights and that the vision customers have about the value of our product reflects our internal vision.


Introducing Technological Innovation

“The technology you use impresses no one. The experience you create with it is everything.”
Sean Gerety

In this article, Introducing Technological Innovation in Product Development refers to the process of integrating new or advanced technologies into a product to improve its performance, user experience, operational efficiency, or scalability, making the product more competitive. This can involve adopting emerging technologies like AI, blockchain, IoT, or simply upgrading the existing technology stack to enhance product performance.

However, the value of advanced technology integrated into a product is not always perceived in the same way by customers and the product team. The product team, being more technologically adept and deeply involved in development, often overestimates the importance of the technology itself. This can lead to a phenomenon where the product team, by primarily focusing on technical sophistication, overlooks real customer needs. This disconnect can ultimately hinder adoption and market success, as the product may fail to resonate with users or address their actual pain points effectively.

Innovative technology refers to new, cutting-edge advancements or breakthroughs in science and engineering that create new possibilities. These innovations could be in the form of hardware (smartphones, virtual glasses), software (quantum software), algorithms (AI), or systems (blockchain) that offer new capabilities.

The difference between innovative technology and emerging technology lies in their maturity and market adoption. Emerging technology refers to newly developed technologies that are gaining popularity and show growing potential, but still carry uncertainty regarding their scalability, adoption, and long-term impact, such as quantum computing and blockchain applications. Innovative technology, by contrast, has already transformed industries and demonstrated its advantages, such as cloud computing, modern AI applications, and 5G networks.

Furthermore, there is a clear distinction between technology itself and its application in a product or service to create value with a specific focus. Technology, in its essence, is merely a tool - its true significance lies in how effectively it is leveraged to achieve a specific objective, directly addressing a customer problem while also generating business value. Consequently, the real value does not stem from the technology alone but from its ability to enhance the user experience and solve meaningful challenges in a way that benefits both the customer and the business.

Product teams can leverage various technologies to achieve the same goal. The selection of a particular technology should be driven by its effectiveness in solving the problem, alignment with the product vision, and compatibility with the current technical setup, rather than the technology itself. Consequently, no single technology is universally relevant for all companies addressing the same problem. Each organization must evaluate and select the technology that best aligns with its resources, goals, and strategy.

There are numerous debates surrounding the meaning and usage of the term innovation and what truly defines an innovative product. These discussions often spark controversy regarding the disruptive nature of products labeled as innovative, particularly in cases where the application of existing innovative technology is simply extended to a new industry or use case rather than fundamentally transforming the market.

Speaking broadly about innovation in tech products, it can take various forms, each impacting industries and consumers differently. Innovation can manifest as technological advancements, process improvements, business model transformations, or new applications of existing solutions.

One example is an innovative process, such as Lean methodology, which is not directly perceived by the customer in the product itself but provides indirect benefits like improved affordability, faster availability, enhanced functionality, and increased personalization.

Another type of innovation is innovation in usage, such as Netflix shifting from DVD rentals to a streaming platform, fundamentally changing how users consume entertainment. Additionally, there is functional innovation, where a product offers new or enhanced capabilities to improve user experience. An example is the iPhone’s multi-touch screen, which revolutionized smartphone interaction by eliminating the need for physical keyboards and enabling intuitive gesture-based controls.

Business model innovation is another form of innovation, as described by Laurence Lehmann-Ortega in the Odyssey 3.14 approach. Unlike product or technological innovation, business model innovation transforms an entire industry by challenging the existing ecosystem, introducing new value or experiences, and often disrupting traditional players while creating opportunities for new entrants. A prime example is Uber, which not only redefined urban mobility but also gave rise to the broader economic trend known as "uberization." By leveraging a platform-based model, Uber excluded traditional taxi operators from their historical dominance, reshaped consumer expectations for convenience and pricing, and created new economic dynamics in the gig economy.

Clayton Christensen, Harvard professor and businessman, introduced in 1997 in his book The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail, the concept of Disruptive innovation, describing it as the ability of a small company to challenge established businesses or how certain products and services evolve from serving a niche or small market to becoming widely accepted. Christensen differentiates Disruptive Innovation from Sustaining Innovation, which focuses solely on incrementally improving existing products without fundamentally reshaping the market.

According to Harvard Business Review, Apple's iPhone was an example of Sustaining innovation, as it built upon and enhanced existing smartphone technology rather than disrupting an entire industry from the bottom up.

Whatever the definition of innovation, any company benefits from introducing technological advancements. Emerging companies leverage innovation to break into markets dominated by industry leaders, while established companies use it to stay ahead of disruption and maintain their market leadership.

For established companies, however, shaking up an existing model and transitioning from a stable business to an uncertain one carries significant risk. The comfort of the status quo often leads to internal resistance against adopting novelty. Conversely, young startups struggle to disrupt entrenched ecosystems, secure necessary investments, and gain the trust of partners, making their innovation journey equally challenging yet essential for survival.

For an established company, fostering a culture that supports disruptive innovation is a significant challenge. Product teams are usually focused on managing existing products rather than exploring beyond the current technological setup, which is hardly manageable within a product roadmap. While sustaining innovation fits within traditional product cycles, disruptive innovation requires active leadership support and a culture that encourages bold experimentation.

A key question arises: should an organization integrate innovation into its day-to-day processes, leading to incremental improvements, or should it create a dedicated team or entity, like X by Google, focused on moonshot innovations?

Whatever the chosen option, this approach should be embedded into the company culture, where boldness is encouraged by top management despite the associated risks, while maintaining a clear connection to the company’s mission.

For example, companies that manage innovation through a separate unit risk turning into an expensive proof-of-concept (POC) factory, where ideas are generated but rarely translate into real business opportunities. On the other hand, those that integrate innovation into their existing processes may see their efforts reduced to superficial enhancements or fancy features, rather than achieving true disruptive innovation.

In his 2012 TechCrunch interview with Andrew Keen, Clayton Christensen explores how to escape the innovation dilemma and avoid missing disruptive innovation. He mentions that one possible approach is establishing a separate business unit and cites IBM's response to the PC industry disruption as a successful example: “The way IBM did it was they made the mainframes in Poughkeepsie and went to Rochester, Minnesota, to make the minis. That was a different business model. Gross margins in mainframes were 60 percent. In the minis, gross margins were 45 percent. There were about eight companies that made mini-computers. When the personal computer disrupted the mini-computer, all of the other companies were killed except IBM.”

By creating a separate unit, IBM was able to disrupt its own business and survive industry disruption by accelerating innovation outside internal constraints.

However, Christensen also highlights that some companies successfully navigate the innovation dilemma by managing gradual and continuous innovation without the need for a separate unit. This approach is driven more by customer satisfaction metrics, such as Net Promoter Score (NPS), rather than financial performance indicators like Internal Rate of Return (IRR).

Companies like Apple, Amazon, and Salesforce have demonstrated an ability to integrate innovation within their core business, fostering a culture where innovation is embedded into their organizational DNA rather than being isolated in external units. In this scenario, making money is not the primary goal but rather a natural outcome of delivering continuous value to customers.

In his article “Creating an Innovation Sandbox”, Eric Ries, author of The Lean Startup, advocates for a temporary, separate team structure to foster innovation within larger organizations. He introduces the concept of a "sandbox for innovation" - a controlled environment where teams can experiment safely without disrupting the core product or business.

The principle is straightforward: any team can launch a split-test experiment, but it must remain contained within the sandbox, meaning it only affects a small, defined part of the product or a limited user segment. These experiments are:

  • run by cross-functional teams,

  • time-bound, typically lasting a few weeks,

  • measured by clear success metrics,

  • independent, but aligned with broader company goals.

This approach promotes an innovation-driven culture by giving product teams the autonomy and space to be creative, proactive, and data-driven. By reducing risk and enabling fast learning, the sandbox model empowers teams to explore bold ideas, iterate quickly, and validate solutions before scaling them across the organization.
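One common way to keep such an experiment contained to a limited user segment is deterministic bucketing: hash the user and experiment identifiers together and expose the experiment only to a fixed percentage of buckets. A minimal sketch of this pattern (the names and the 25% rollout are illustrative, not taken from Ries’s article):

```python
import hashlib

def in_sandbox(user_id: str, experiment: str, rollout_pct: int) -> bool:
    """Deterministically map a user to a bucket in 0-99 and expose the
    experiment only to the first `rollout_pct` buckets."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

# The same user always lands in the same bucket, so the experiment stays
# contained to its segment and results are reproducible across sessions.
exposed = [u for u in ("u1", "u2", "u3", "u4") if in_sandbox(u, "new-onboarding", 25)]
```

Hashing on both the experiment name and the user ID means different experiments slice the user base independently, so one sandbox test does not systematically overlap with another.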

Having a comfortable budget for innovation, like Google’s X, is a luxury reserved for companies with strong revenue streams, allowing them to balance R&D investments with ongoing business needs. However, frugality can also be a powerful driver of breakthrough innovation. A striking example is the Ukrainian army’s use of 3D-printed military drones, which disrupted traditional perceptions of expensive military equipment by demonstrating how low-cost, rapid prototyping can create strategic advantages. This example underscores how resource constraints can lead to creative problem-solving and cost-effective innovations even in high-stakes situations.

Conversely, having a large budget does not guarantee success. Companies like Quibi, a short-form streaming platform, failed to capture an audience despite securing massive financial backing from industry leaders. This highlights that innovative success is not solely dependent on funding - it also requires market fit, user adoption, and strategic execution.

Moreover, introducing technological innovation into the market, particularly disruptive innovation, requires strategic communication to drive behavioural change both internally and externally. For internal transformation, product evangelization is a crucial responsibility of product teams, especially in technological innovation. For market adoption, a well-structured go-to-market strategy should be deployed - one that goes beyond conventional approaches and reframes well-known concepts in a new way. This transition involves training internal teams, such as sales and customer support, as well as educating external stakeholders, including partners, clients, investors, business analysts, and influencers.

Additionally, legal frameworks often need to evolve alongside technological advancements. The classic "chicken-and-egg" debate - should legislation adapt first, or should market changes drive legal updates? - suggests that both should progress simultaneously. While regulatory bodies oversee existing technologies and anticipated use cases, they are not inventors. In many cases, regulators and market pioneers must advance together, shaping policies that enable innovation while addressing risks and ethical concerns.


Common Pitfalls and Their Associated Cognitive Biases

1. Overvaluing Technology

Pro-Innovation Bias - Overestimating the benefits of innovation while underestimating its limitations or risks.

Appeal to Novelty - The belief that newer ideas, products, or innovations are inherently better or more valuable than older ones, regardless of evidence.

Bandwagon Effect - Adopting beliefs, behaviors, or trends simply because they are popular or widely accepted by others.

2. Creativity Challenges

Functional Fixedness - Struggling to see objects or tools outside of their conventional use, limiting creativity and problem-solving.

Einstellung Effect - Approaching a problem with a mindset that worked in the past, even when a more efficient solution exists.

Status Quo Bias - Favoring current conditions over change, even when alternatives may offer significant benefits.

3. Misunderstanding the Customer

Overconfidence Bias - the tendency to overestimate one’s abilities or accuracy, leading to overly optimistic judgments and flawed decision-making

Confirmation Bias - People seek information that confirms their existing beliefs and ignore conflicting evidence.

Self-Consistency (Commitment bias) - The tendency to perceive one’s past attitudes and behaviors as consistent with current ones, even if they have changed.

Action Bias - The tendency to favor action over inaction, even when there is no indication that acting leads to a better result.

4. "Big problem, small market."

Survivorship Bias - Focusing on successful outcomes or entities while overlooking those that failed, leading to skewed conclusions and an incomplete understanding of the factors contributing to success or failure.

Availability Heuristic - Where individuals rely on immediate examples that come to mind when evaluating a decision, event, or likelihood of something happening.

Base Rate Fallacy - Where people ignore or undervalue the base rate (general statistical probability) of an event in favor of specific, anecdotal, or recent information.

Sunk Cost Fallacy - Continuing to invest in a project, decision, or activity based on the time, money, or resources already spent, rather than evaluating its current and future value.

Addressing Human Biases vs. Algorithmic Biases

Algorithmic Bias Due to Biased Training Data

Algorithmic Bias Due to Imbalanced Representation

Algorithmic Bias Due to Wrong Focus or Feature Selection

Algorithmic Bias Due to Reinforcement of Prejudices in AI Systems

Algorithmic Bias Due to Context Transfer

Stereotyping bias - The tendency to rely on oversimplified or generalized beliefs about groups when making judgments.

Availability Heuristic - Where individuals rely on immediate examples that come to mind when evaluating a decision, event, or likelihood of something happening.

Group Attribution Error - The tendency to assume that an individual's traits reflect those of their group, or that a group's decision reflects the preferences of every member.

Base Rate Fallacy - Where people ignore or undervalue the base rate (general statistical probability) of an event in favor of specific, anecdotal, or recent information.

Bandwagon Effect - Adopting beliefs, behaviors, or trends simply because they are popular or widely accepted by others.

False Consensus Effect - Where individuals assume their beliefs, behaviors, and traits are more common and widely shared than they actually are.

1. Overvaluing Technology

The first pitfall is the tendency of technical teams to overestimate the value of new technology in solving customer needs better than existing, less technologically complex solutions. Two key biases are at play here: Pro-Innovation bias and Appeal to Novelty bias. Both biases can lead to adopting new technologies without fully considering their impact, feasibility, or alignment with user needs. 

Appeal to Novelty bias refers to the tendency to favor ideas, products, or technologies simply because they are new or perceived as innovative, rather than based on their actual value or effectiveness. In our modern society, people are often attracted to newness and eager to be on the "cutting edge" of technology. They believe that new things must be better, even without solid evidence to support that claim. Pro-Innovation bias, on the other hand, is the tendency to view innovation positively and believe that new ideas or technologies automatically result in progress. 

The difference between these two biases is subtle. Pro-Innovation bias involves the unquestioning acceptance of any new idea or technology, whereas Appeal to Novelty bias is more about the fascination with newness itself. An example of Pro-Innovation bias can be seen with generative AI technology, where, even in the face of potential dangers or ethical concerns, there is a strong, widespread belief in its positive societal impact. 

The risks associated with these biases are significant. Premature adoption of technology without proper market research, exploration of alternative assumptions, or cost-benefit analysis can lead to proof-of-concept (POC) products that never find a market, wasting resources, time, and effort. Large organizations may invest heavily in a ‘POC factory’ whose innovations rarely find market fit, consuming valuable resources without delivering a return. 

Additionally, if these biases are poorly managed by top management, they can lead to internal conflicts between teams. On one side, there are teams focused on solving day-to-day customer problems and pursuing incremental innovation with limited resources. On the other, there are teams working on radical, high-risk innovations with larger budgets and lower immediate expectations from management. This imbalance can create tensions, as operational teams may feel constrained by short-term pressures, while innovation teams operate with greater flexibility, often without the same urgency to deliver immediate results. Effective leadership is crucial to balancing both approaches, ensuring that neither short-term problem-solving nor long-term disruptive innovation is undervalued. 

Some technologies are not mature enough to deliver tangible benefits immediately and often take longer to become ubiquitous. A good example is the RFID tag. The underlying technology was originally devised during World War II by Leon Theremin for The Thing, a covert listening device planted in the US embassy that the Soviets activated by beaming radio waves at it. The technology was later adapted to improve inventory management and patented in the 1970s. However, RFID remained largely underutilized for nearly 50 years due to high costs, the lack of universal standards (no consistent protocol), and security concerns. 

Today, RFID adoption has gained momentum due to improvements in cost efficiency, better standardization, and the growing need for real-time inventory visibility. It is now widely used by major retailers such as Walmart, Target, Tesco, and Decathlon for inventory tracking, showcasing how technology has finally reached the point of mass adoption as it matured and became more affordable and reliable. 

The Bandwagon Effect is also a major driver of the increased adoption of new technologies. It occurs when people adopt beliefs, behaviours, or trends solely because they are popular or widely accepted by others. While a new technology can certainly be helpful, its use in a product requires first meeting Product-Market Fit by solving a real customer problem, offering a solution that is competitive enough to justify resource allocation, and being aligned with the product vision and strategy. 

An example of the Bandwagon Effect in innovation is the rise of Web 3.0, which gained significant momentum around 2014 with the emergence of blockchain and decentralized technologies. The trend peaked in 2020 with the hype surrounding the Metaverse, leading even Facebook to rebrand as Meta and fueling the widespread adoption of NFTs. However, as companies and investors rushed to align with the trend, scalability issues surfaced, and the tangible benefits for customers remained unclear. This illustrates how the Bandwagon Effect can drive widespread adoption of emerging technologies before their real-world viability and long-term value are fully validated. While there are some specific use cases in industries like gaming for the Metaverse and cryptocurrencies for blockchain, the broader industry adoption has been slower than anticipated, highlighting the challenges of transitioning from novelty to practical value.

Use Case - Samsung

In August 2013, Samsung’s semiconductor division launched a crowdsourcing campaign to explore potential applications for its Flexible Display technology and invited participants to propose business plans. The company offered a $10,000 reward for the winner, along with smaller prizes for second and third places. Samsung framed the challenge as: "How would you change people’s lives with Samsung Flexible Display technology?" 

Unlike internal innovation teams, which typically work within predefined constraints such as cost and development timelines, this campaign focused solely on identifying valuable use cases and understanding how much people would be willing to pay for them. 

Addressing Human Biases vs. Algorithmic Biases

When it comes to introducing technological innovation, products integrating generative AI technology are considered highly promising and are increasingly explored. However, they also face criticism due to algorithmic bias.

Algorithmic bias occurs when decision-making processes are delegated to algorithms that have been designed by humans and, as a result, can amplify human cognitive biases rather than eliminating them.

Human cognitive bias refers to systematic errors in judgment and decision-making that arise from psychological tendencies shaped by evolution, experience, and social norms. In contrast, Algorithmic Bias refers to systematic errors in AI and machine learning models that emerge from data, assumptions, and design choices made during the AI learning process and model development, often leading to unfair or discriminatory outcomes. Algorithmic bias arises unintentionally and is difficult to detect unless it triggers a sensitive issue that prompts the product team to investigate the source of errors.


Both human cognitive bias and Algorithmic bias rely on unintentional shortcuts in decision making. In some cases, Algorithmic biases are influenced by human cognitive shortcuts, such as stereotyping or selective focus. Understanding how these two types of biases impact AI decision-making is crucial for developing fair and ethical AI-based systems. 

The primary challenge with Algorithmic bias lies in the delegation of decision-making and prediction processes to algorithms. Since AI models are trained and configured by a product team, they can accumulate two types of bias: human bias, which arises from the team's decisions on how and why to integrate AI technology, and algorithmic bias, which results from biased data and flawed assumptions within the model itself. Recognizing and mitigating both types of bias is essential to ensuring AI-driven decisions remain fair, transparent, and equitable. 

When humans process information, they can question existing knowledge, develop firm opinions shaped by their education and values, and selectively retain or disregard certain details. In contrast, algorithms lack personal judgment or subjective filtering - they process and store all integrated data indiscriminately, much like a child absorbing knowledge. However, unlike humans, who have cognitive and memory limitations, algorithms can continuously access and analyze large datasets without forgetting any details.

The most concerning aspect of algorithmic bias is its scalability, which can amplify and reinforce systematic deviations across large datasets. While human biases can vary from one individual to another, influenced by personal experience, education, and psychology, algorithmic bias can create a uniform, systemic shortcut that affects all users interacting with a biased system. 

It is incorrect to assume that algorithms are inherently more trustworthy and reliable than human decision-making, but dismissing their advantages would also be a mistake. While both humans and AI rely on available information for decision-making, they process it in fundamentally different ways, making algorithms more effective for tasks requiring precision and computation (such as calculus), while humans excel in areas that demand ethical reasoning, intuition, and contextual understanding. The fundamental difference is that while algorithms execute requests, analyze data, and generate responses, the reason for performing these tasks is always initiated by humans. This highlights that bias is a challenge in both human and machine-based decision-making. 

The integration and widespread adoption of generative AI represent a major transformation in product development, bringing ethical challenges related to the extensive delegation of decision-making to machines and increasing legal accountability under evolving regulations. As LLM technology continues to expand, product teams must recognize algorithmic bias, understand its potential consequences, and implement strategies to mitigate it to ensure AI systems are fair, responsible, and compliant with legal standards. 

While human biases stem from emotional experiences, personal beliefs, and psychological tendencies, algorithmic biases originate from data limitations and flawed model assumptions, which can lead to large-scale discriminatory outcomes, making them particularly concerning. 

There are several sources of algorithmic bias that can affect AI decision-making.

​Today, AI systems influence nearly every aspect of daily life, from restaurant recommendations to facial recognition technologies. However, both technical and ethical concerns remain unresolved, particularly regarding bias and fairness. Several key issues arise from the biases mentioned above: 


  • Inconclusive Evidence – AI often establishes correlations rather than causal relationships, leading to flawed decision-making. 

  • Lack of Transparency and Accountability – Many AI systems operate as "black box" models, making their decision processes inscrutable and difficult to audit. 

  • Biased and Discriminatory Evidence – Facial recognition technology has been widely criticized for racial and gender biases, leading to higher misidentification rates among minority groups. 

  • Unfair Outcomes and Discrimination – AI models, such as Amazon’s recruitment tool, have unintentionally reinforced gender bias, favoring male applicants due to historically male-dominated hiring patterns. 

  • Transformative Effects on Society – Algorithms reshape human perception and behaviour, as seen with YouTube’s recommendation system, which amplifies misinformation and ideological polarization. 

  • Traceability and Accountability – The legal and ethical responsibilities of AI decision making remain unclear, creating challenges in determining who is responsible when an AI system causes harm or discrimination. 


As AI continues to play a larger role in decision-making, addressing context transfer bias and other algorithmic biases is essential for ensuring fairness, transparency, and accountability in AI-based systems. Both human and algorithmic biases must be addressed to promote ethical and fair decision making. While algorithmic bias can often be minimized through careful design and monitoring, product teams must ensure that AI systems do not reinforce or amplify human cognitive biases, preventing the scaling of discrimination and unethical outcomes. Regarding AI regulation, we are still in the early stages of defining and implementing comprehensive legal frameworks, and enforcement remains a significant challenge. The European Union has introduced the AI Act, which imposes compliance requirements and mandates robust risk management, governance, and human oversight to regulate AI systems effectively.

In contrast, the United States, despite being a global leader in AI, lacks federal AI regulation. However, some states, such as New York, Illinois, and Utah, have implemented localized controls and accountability measures. 

China, another major AI powerhouse, introduced national-level AI legislation and standards in 2024, establishing a centralized regulatory framework for AI governance. Meanwhile, Japan took an earlier approach, publishing its "Social Principles of Human-Centered AI" in 2019, which emphasize respect for human dignity, sustainability, and individual well-being in AI development. 

Other countries, including Brazil and Malaysia, have also made progress in regulating AI, taking initial steps toward ensuring ethical and responsible AI deployment. As AI technology continues to advance, the challenge for global policymakers will be to balance innovation with regulation, ensuring AI systems remain fair, transparent, and accountable. 

The primary challenge in mitigating algorithmic bias is the so-called Proxy Problem, which occurs when algorithms rely on seemingly neutral attributes that inadvertently correlate with sensitive characteristics such as race or gender. Even when explicit discrimination is removed, these proxies can still reinforce bias, leading to unfair outcomes. 
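As an illustration, a proxy can sometimes be exposed with a simple correlation check between a "neutral" input feature and the protected attribute it stands in for. The sketch below uses invented, synthetic data; the feature name and the 0.8 cutoff are illustrative assumptions, not a standard:

```python
# Hypothetical proxy check on synthetic data: a "neutral" zip-code
# feature can still encode a protected attribute removed from the model.
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

protected = [1, 1, 1, 0, 0, 0, 1, 0]  # protected attribute (dropped from model)
zip_feature = [0.9, 0.8, 0.95, 0.1, 0.2, 0.15, 0.85, 0.05]  # still fed to model

r = pearson(protected, zip_feature)
# A strong correlation means the feature acts as a proxy, so removing
# the protected attribute alone does not remove the bias.
is_proxy = abs(r) > 0.8
```

In practice this kind of check is only a first pass; real proxy detection looks at joint and non-linear relationships as well.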

Algorithmic bias is widely discussed because it reflects and amplifies existing social inequalities, rather than eliminating them. In her 2020 article, "Algorithmic Bias: On the Implicit Biases of Social Technology," Gabbrielle M. Johnson explores the relationship between Algorithmic and human bias, arguing that both are inherent to decision-making processes. Her work highlights the urgent need for ethical frameworks to address the risks posed by biased AI systems and to ensure more equitable and accountable AI-driven decisions. 

Despite the presence of cognitive biases and human limitations, human involvement remains a necessary step in the evolution of algorithms. This is why the concept of Human-in-the-Loop (HITL) has been introduced in artificial intelligence (AI) and machine learning (ML). HITL ensures that human intervention is incorporated into the AI decision-making process to improve accuracy, reduce errors and biases, and ensure ethical usage of AI systems.
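A minimal sketch of the HITL routing idea follows; the threshold, labels, and function names are hypothetical, and real systems add review queues, audit logs, and feedback into retraining:

```python
# Hypothetical HITL gate: auto-accept confident predictions,
# route low-confidence ones to a human review queue.
REVIEW_THRESHOLD = 0.85  # assumed cutoff; tuned per use case in practice

def route_prediction(label: str, confidence: float) -> dict:
    """Decide whether a model prediction can be used directly."""
    if confidence >= REVIEW_THRESHOLD:
        return {"label": label, "decision": "auto"}
    # Below threshold: a human makes the final call.
    return {"label": label, "decision": "human_review"}

batch = [("approve", 0.97), ("reject", 0.62), ("approve", 0.88)]
routed = [route_prediction(label, conf) for label, conf in batch]
auto = [r for r in routed if r["decision"] == "auto"]
queued = [r for r in routed if r["decision"] == "human_review"]
```

The design choice here is that uncertainty, not the predicted label itself, triggers human oversight, which keeps reviewers focused on the cases where the model is least reliable.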

1. Algorithmic Bias Due to Biased Training Data

Also known as ‘garbage in, garbage out’, this algorithmic bias occurs when an AI model is trained on historically biased data, causing it to replicate and amplify those biases. In such cases, achieving a neutral or fair model is nearly impossible. Additionally, identifying these biases can be challenging and time-consuming, particularly because training data is often not publicly accessible, which makes it more difficult to assess and rectify underlying flaws. 

A well-known example is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an AI system used in the U.S. judicial system to predict the likelihood of recidivism (reoffending). In 2016, ProPublica investigated COMPAS and found that its algorithm exhibited racial bias. The model disproportionately classified black defendants as ‘high risk’ for recidivism while labeling white defendants as ‘low risk’, even when their criminal records were statistically similar. This case demonstrates how bias in training data can reinforce systemic discrimination in high-stakes applications like criminal justice.

This type of algorithmic bias is often driven by Stereotyping bias, where AI generalizes characteristics based on group membership rather than evaluating individuals on their own merits. While some stereotypes have practical applications, such as age-based training programs for children or minimum age requirements for driving, in most situations they are harmful, reinforcing cognitive shortcuts that lead to oversimplifications and discriminatory patterns.
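The "garbage in, garbage out" dynamic can be shown with a deliberately simplistic, fully synthetic sketch: a frequency-based "model" trained on skewed historical labels simply reproduces the skew in its predictions (all data, groups, and names below are invented):

```python
# Toy illustration on synthetic data: a model fitted to historically
# biased labels replicates that bias at prediction time.
from collections import defaultdict

# (group, historical 'high risk' label) - labels skewed against group "B"
history = [("A", 0), ("A", 0), ("A", 1), ("A", 0),
           ("B", 1), ("B", 1), ("B", 0), ("B", 1)]

by_group = defaultdict(list)
for group, label in history:
    by_group[group].append(label)
# "Training": learn each group's historical high-risk rate.
learned = {g: sum(v) / len(v) for g, v in by_group.items()}

def predict(group: str) -> int:
    """Flag 'high risk' whenever the learned group rate exceeds 50%."""
    return int(learned[group] > 0.5)
```

Two otherwise identical individuals from groups "A" and "B" receive different predictions here, purely because the historical labels were skewed, which is the essence of this failure mode.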

Overcoming Cognitive Bias when Introducing Technological Innovation

  • Customer Relevance and Market Fit

- Does our innovation solve a real customer problem? Is there a market for it? 

Any product needs to meet its Product-Market Fit (PMF). Regularly validate assumptions with real customers, remain open to feedback, and test ideas before progressing with development. 

- What was the sample size for our research? Is it representative?

Review the customer sample size used in testing and ensure it is representative of the broader market. Also, track whether people dropped out of studies and how long they interacted with the product. 
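One rough, back-of-envelope way to sanity-check a sample size is the margin of error for a sampled proportion; this sketch assumes simple random sampling and a 95% confidence level, which real studies rarely satisfy exactly:

```python
# Margin of error for a proportion estimated from a survey sample
# (95% confidence z-value, worst case p = 0.5).
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the confidence interval for a sampled proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# 30 respondents give roughly a +/-18 point margin; 400 give about +/-5.
small = margin_of_error(30)
large = margin_of_error(400)
```

The point is not precision but calibration: feedback from a handful of users can easily swing by double digits, so conclusions drawn from tiny samples deserve heavy hedging.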

- Have we gathered all types of feedback, or only positive feedback? 

Encourage diverse perspectives by allowing feedback that challenges assumptions. Having critical voices in the conversation opens the door to new insights and innovative ideas.

 

  • Strategic Assessment and Feasibility

- Are the benefits over competitors enough to justify engaging internal resources? 

Conduct a Cost-Benefit Analysis to assess whether the competitive advantage gained from adopting the new technology is worth the resources it will consume. 

- Have we explored the risks of integrating or not integrating this technology, both in the short and long term? 

Some technologies may not be mature enough to provide immediate benefits or may take longer to scale, so evaluating the risks of missing the opportunity is essential. 

- Have we challenged our business model in light of potential future changes? Is it sustainable enough? 

Evaluate if the business model will remain viable with the adoption of the new technology and how future market shifts could impact it.


  • Execution, Scalability, and Innovation Process

- Is this a potentially scalable solution, or could it become scalable? 

Test the business potential by considering if the solution can scale efficiently as demand grows and whether it can be adapted to new markets or customer segments. 

- Have we set clear metrics to ensure we remain focused on the right priorities? 

Establishing measurable success criteria helps teams stay focused on achieving the most important outcomes and avoid getting sidetracked by irrelevant details. 

- Can hackathons or crowdsourcing serve as effective sources for idea generation? 

Stepping back from a problem can often stimulate the creative process, and one of the best ways to achieve this is by involving teams that are not directly engaged in the project, bringing fresh perspectives and novel ideas.


  • Algorithmic Integrity and Fairness 

- Is the use of AI justified, and if so, have we ensured transparency and regulatory compliance? 

One of the most effective ways to mitigate algorithmic bias is through comprehensive audits to identify and address potential biases in AI models.
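One simple audit metric, sketched below on synthetic data, is the disparate impact ratio: the selection rate of one group divided by another's. The 0.8 cutoff follows the common "four-fifths" heuristic, a rule of thumb rather than a universal legal standard:

```python
# Disparate impact ratio on synthetic selection outcomes (1 = selected).
def selection_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% selected

ratio = selection_rate(group_b) / selection_rate(group_a)
# Ratios below ~0.8 are commonly flagged for closer human review.
flagged = ratio < 0.8
```

A flagged ratio is a prompt for investigation, not proof of discrimination; a full audit also examines error rates, feature proxies, and the context in which decisions are made.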


From Customer Experience to Product Design

You’ve got to start with the customer experience and work back toward the technology, not the other way around.

Steve Jobs

Product Design and Customer Experience (CX) are two crucial components within the Product Development process. Often referred to as customer-centric design, the idea is that Product Design should always start with the customer. This approach is especially important in the technology industry, where technology can quickly become the focal point of everything.  

A product becomes valuable not by its set of features but primarily by how easily it can be used. This usability can significantly influence emotional attachment or habitual use, all made possible through strong product design. However, this is only achievable when Customer Experience is prioritized from the very beginning. 

Furthermore, Product Design is a holistic approach that integrates Customer Experience (CX) and User Experience (UX). Customer Experience (CX) encompasses the entire journey a customer has with your product, from awareness through to post-purchase interactions. It includes both direct interactions (such as purchasing and using the product) and indirect interactions (such as advertising, word-of-mouth, or customer support). On the other hand, User Experience (UX) focuses on the interaction between the user and the product or service during the usage phase, encompassing emotions, perceptions, and behaviours that occur at that time. 

It’s important to note that customers and users are not always the same individual. In B2B contexts, for example, the purchasing department (the buyer) is often different from the end user of the product. Similarly, in B2C markets, products designed for children may involve parents as the buyers, even though children are the users. 

As previously mentioned, product design is a holistic approach, which is why Product Design should be at the center of all interactions with the product. It ensures that both buyers and users experience the intended emotional journey, aligning the product’s design with their needs and expectations, and providing a seamless, consistent experience throughout the entire customer journey. 

Both Product Design and Customer Experience (CX) should stay aligned with the Product Vision and focus on delivering outcomes (solving customer problems and delivering tangible value) rather than outputs (features and functionalities). The distinction between outcomes and outputs is crucial for ensuring the product meets customer needs effectively.

Common Pitfalls and Their Associated Cognitive Biases

Design Fixation

Einstellung Effect - Approaching a problem with a mindset that worked in the past, even when a more efficient solution exists.

Familiarity Effect - People tend to favour things that are familiar to them, while showing less interest in things that are unfamiliar.

Functional fixedness - Struggling to see objects or tools outside of their conventional use, limiting creativity and problem-solving.

Status Quo bias - Favoring current conditions over change, even when alternatives may offer significant benefits.

"You are not the user"

False-Consensus Effect - Where individuals assume their beliefs, behaviors, and traits are more common and widely shared than they actually are.

Availability Heuristic - Where individuals rely on immediate examples that come to mind when evaluating a decision, event, or likelihood of something happening.

Overconfidence Bias - The tendency to overestimate one’s abilities or accuracy, leading to overly optimistic judgments and flawed decision-making.

Leveraging Cognitive Biases in Product Design

Helping Users Make Decisions

Choice Overload


Enhancing Product Perceived Value

Labor Illusion Bias - Perceiving a product or service as more valuable when visible effort appears to go into delivering it.

Framing Effect - The way information is presented influences decisions more than the content itself.

Von Restorff Effect (Isolation Effect) - An item that stands out from its surroundings is more likely to be noticed and remembered.

Influencing Social Principles

Social-Desirability Bias

Bandwagon Effect

Reciprocity Bias

Halo Effect

Familiarity Effect

Scarcity Bias

Authority Bias

Commitment Bias

1. Design Fixation 

The first category of pitfalls that may arise during the design process is linked to design fixation, where product designers become so focused on a particular problem that they struggle to identify appropriate solutions. Biases that contribute to fixation reduce the ability to “think outside the box” and hinder the creative process. 

A key bias in this context is the Einstellung Effect (from the German Einstellung, meaning "attitude" or "setting"), which occurs when individuals approach a problem using the same mindset that worked for them in the past, even when a more effective solution exists. This bias is especially common when product designers apply complex heuristics - methods used to solve difficult problems - to simpler problems where a more straightforward approach would be more effective. As a result, the creative ability necessary for problem-solving is diminished.

The more experienced a product designer is, the more that experience can act as a shortcut built on familiar patterns. While expertise is a valuable asset, it can work against designers on complex problems: the expert may focus on familiar patterns and what they already know (Familiarity Effect) rather than considering unfamiliar or new elements, making decisions based solely on known solutions. Experienced designers may also rely on knowledge they can easily recall and frequently use (Availability Heuristic), which can prevent them from thinking creatively and exploring alternative approaches. 

Another bias that affects creativity is Functional fixedness, where product designers struggle to envision alternative uses or experiences beyond traditional applications. Research suggests that the more experience an individual has with a particular problem-solving method, the more susceptible they are to functional fixedness, making it harder for them to recognize alternative solutions outside of their familiar framework. 

Status Quo bias further reinforces functional fixedness, as employees, seeking conformity and stability, tend to default to established ways of doing things rather than exploring innovative alternatives. This resistance to change can inhibit creativity, prevent the adoption of new methodologies, and limit the innovation potential in product design. 

Guillaume Gourbeix, co-founder of L’Atelier Universel, shares that applying a proven pattern - like those used within the same category of products - can be highly effective for user experience design, as these patterns have already been tested and demonstrated their efficacy. When a client wants to develop a completely new type of design and user experience, the process typically requires more time and effort, especially during the validation phase. Additionally, using familiar interface patterns helps minimize the associated transformation effort for users.

Overcoming Cognitive Bias in Product Design

Mitigating these biases requires structured product discovery, user research, and continuous validation. By incorporating usability testing, diverse user feedback, and iterative design processes, teams can challenge assumptions, broaden perspectives, and create more user-centered solutions that truly meet the needs of their audience. The following structured set of questions helps product teams minimize cognitive bias in decision-making. 


  • Grounding Design in Real User Insight

- Are we actively practicing self-questioning, challenging assumptions, and conducting usability tests to validate our design decisions? 

- Have we gathered diverse and representative feedback from real users and customers to refine the product based on their expectations? 

- Have we tested the product with actual users and customers to validate usability and desirability? 

- Have we examined our design choices for biases, ensuring they are inclusive? 

- Would our product still work effectively if our target audience changed (e.g., from men to women)?


  • Framing the Problem & Seeking Outside Inspiration 

- Have we avoided overcomplicating the solution by focusing on core problems and eliminating unnecessary details to enhance creativity? 

- Have we explored solutions from unrelated industries to inspire innovative approaches in our own field? 


  • Building a Culture That Supports Innovation 

- Is our organization cultivating an environment that embraces new ideas, diverse perspectives, and alternative problem-solving methods? 

- Have we taken a step back from the problem (incubation period) to gain fresh insights and new perspectives?

Leveraging Cognitive Biases in Product Design

When it comes to UX Design, cognitive biases do not only affect the product team - users themselves are also influenced by biases throughout their purchasing and usage journey. Product teams should be aware of these biases to ensure that their product design aligns with user expectations, enhances decision-making, and ultimately improves customer satisfaction.

Customer experience is one of the key product stages where leveraging user cognitive biases can optimize interactions, streamline navigation, and create more intuitive designs. Understanding how users think and make decisions allows UX designers to guide their actions naturally, reducing friction and enhancing engagement. To better illustrate the impact of cognitive biases on users, we have grouped these biases into three categories, each reflecting how they shape user behaviour and influence interactions with digital products.

  • Helping Users Make Decisions 

Choice Overload is a cognitive bias that leads to choice paralysis, making decision-making difficult when users are presented with too many options. When overwhelmed by an excess of choices, users may delay their decision, abandon the process altogether, or default to a conservative selection that may not best suit their needs, often leading to frustration and dissatisfaction. 

To counteract choice overload, product teams should structure choices in a way that simplifies decision-making. Rather than presenting an extensive list of features or options, it is more effective to organize them into categories based on the "job they do," the value they provide, or the specific audience they target. This approach allows users to quickly grasp the purpose of each option, making navigation and decision-making more intuitive. 

A well-designed user experience can further support decision-making by guiding users step by step through the selection process. For example, some websites use interactive questionnaires that prompt users with key questions about their needs before recommending a curated set of relevant options. This method reduces cognitive load, prevents users from feeling overwhelmed, and ensures they are presented with the most suitable choices.
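The step-by-step narrowing pattern described above can be expressed in a few lines. The plan catalogue, the category tags, and the `recommend` helper below are all hypothetical, invented purely for illustration:

```typescript
// Hypothetical catalogue: each option is tagged by audience and by the
// "job it does", so a short questionnaire can narrow the list for the user.
type Plan = { name: string; audience: "individual" | "team"; job: "storage" | "collaboration" };

const plans: Plan[] = [
  { name: "Solo Basic", audience: "individual", job: "storage" },
  { name: "Solo Plus", audience: "individual", job: "collaboration" },
  { name: "Team Standard", audience: "team", job: "storage" },
  { name: "Team Pro", audience: "team", job: "collaboration" },
];

// Each answered question shrinks the visible set, so the user never faces
// the full list at once.
function recommend(audience: Plan["audience"], job: Plan["job"]): Plan[] {
  return plans.filter((p) => p.audience === audience && p.job === job);
}
```

Answering two questions reduces four competing options to a single curated suggestion - the guided funnel at work.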

​

  • Enhancing Product Perceived Value 

Labor Illusion Bias occurs when customers perceive a product or service as more valuable when they believe that significant effort or labor has been involved in its creation. People appreciate knowing that complex systems are working for them, which enhances their perception of quality and efficiency. Product designers can leverage this bias by making certain processes visible to users, creating a sense of effort and sophistication behind the service. 

For example, ride-hailing apps display the driver’s location and estimated arrival time while the customer waits, reinforcing the idea that the system is actively working in real time. Similarly, data retrieval animations, such as a loading circle with a message like “Retrieving your data”, give users the impression that a system is performing an in-depth analysis. Kayak enhances this effect by displaying the various travel websites it scans to find the best deals, and Tinder shows an intermediate screen stating “Finding potential matches”, reinforcing the effort behind the service. Some applications even intentionally extend wait times slightly to create the illusion of processing a large volume of data, further strengthening the perception of labor and accuracy. 

Another cognitive bias that influences product perception is the Framing Effect, where the way information is presented has a greater impact on decision-making than the content itself. For instance, using positive framing, such as "Save 20%" instead of "Pay 80%", makes the offer more appealing. Similarly, visually highlighting price reductions - striking through the old price and prominently displaying the new discounted price - enhances the perceived value more effectively than simply showing the updated price.

Von Restorff Bias, also known as the Isolation Effect, occurs when a distinct element stands out from a group of similar items, drawing disproportionate attention. The human brain naturally focuses on the unusual, making this a powerful tool for guiding user attention. Designers can apply this bias by bolding key words in text to ensure readers capture the most important messages. Similarly, call-to-action buttons can be made more visually distinct by using contrasting colors, sizes, or placements, making them stand out from other interface elements and increasing interaction rates.

​

  • Influencing Social Principles in Product Design 

In his book Influence: The Psychology of Persuasion, Dr. Robert Cialdini describes six principles of influence, derived from experiments conducted with students and widely applied in product design. Interestingly, most participants claimed they always made independent decisions and were not influenced by others. However, experimental results proved the opposite. This highlights why it is essential to test, experiment, and observe user behaviour rather than relying solely on what users say, as Social-Desirability Bias often skews self-reported statements.

This bias reflects people's tendency to present themselves in a socially desirable manner, exaggerating positive behaviours while minimizing undesirable traits. 

We applied Cialdini’s Six Principles of Persuasion to Product Design and identified relevant cognitive biases that shape how users perceive and interact with products. These principles, originally developed in the context of marketing and psychology, can be leveraged to enhance user experience, improve engagement, and drive conversions by aligning with users' natural decision-making processes.

- Social Proof Principle

Social proof can be explained by the Bandwagon Effect and leveraged to encourage customers to take action by highlighting what others are doing or thinking. People tend to follow the behaviours and choices of others when making decisions, as social influence plays a significant role in shaping their perceptions. This is why many modern websites incorporate product ratings, reviews, and customer testimonials, allowing users to see how others have engaged with a product or service. This validation process helps users feel more confident in their choices.

In physical retail environments, businesses also use social desirability cues to attract more customers. For example, some stores intentionally create visible waiting lines at their entrances to give the impression of high demand, reinforcing the perception that their offerings are desirable and worth trying. 
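A minimal sketch of how such validation cues are often surfaced, with an invented `socialProofLine` helper:

```typescript
// Aggregates raw ratings into the familiar "4.5/5 from 4 reviews" cue
// that lets newcomers lean on the judgment of previous customers.
function socialProofLine(ratings: number[]): string {
  const avg = ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
  return `${avg.toFixed(1)}/5 from ${ratings.length} reviews`;
}
```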

- Reciprocity Principle

​The Reciprocity Principle is linked to Reciprocity Bias, which occurs when users feel compelled to return a favor or good deed. When people receive something valuable for free, they often feel an obligation to give back in some way, whether through engagement, providing personal information, or making a purchase. A common example is when businesses offer free resources, such as studies, white papers, or exclusive reports, in exchange for a user's email, phone number, or company details. This sense of mutual exchange increases the likelihood of user commitment and conversions.

- Liking Principle

The Liking principle suggests that people are more likely to say "yes" to individuals and organizations they know and like. This applies not only to interpersonal relationships but also to websites, brands, and user interfaces. Psychological studies have attempted to determine the reasons behind the Liking Effect and have identified several contributing factors, including similarity (we like people who resemble us), familiarity (repeated positive interactions build trust), cooperation (we appreciate those who help us), association (we are drawn to people who share our values), and praise (we tend to like those who compliment us). At least two key cognitive biases could explain this effect:

Halo Effect happens when a positive impression in one area (such as attractive design, branding, or a well-known spokesperson) influences perceptions of unrelated qualities, making users more likely to trust and engage with the product. Some companies showcase their highly skilled teams on their website to reinforce trust by creating a human connection between users and the product or service. 

Familiarity Effect occurs when people are naturally more drawn to things they already recognize while displaying less interest in unfamiliar elements. This is why some websites adopt a localized cultural design, tailoring their aesthetics to match users’ expectations and comfort levels. What is perceived as familiar and visually appealing varies significantly across cultures. For example, Chinese and Japanese websites tend to feature vibrant colors and dynamic layouts, whereas Western European websites typically favor a minimalist and structured design approach, reflecting different user preferences and cultural aesthetics.

The Liking Principle, driven by the Halo Effect and Familiarity Effect, is particularly effective on the welcome pages of applications and websites. A well-designed landing page creates a strong first impression, which can shape a user’s overall perception of the application. Unlike Reciprocity Bias, which has an immediate effect, the Liking Principle has long-term benefits, as users who enjoy an aesthetically pleasing interface are more likely to develop positive associations with the product. This leads to the conclusion that people are highly influenced by aesthetics and often form opinions based on a few key visual elements rather than thoroughly evaluating all available information.

- Scarcity Principle

Scarcity Bias is tightly linked to the Scarcity Principle and occurs when people assign higher value to things that are scarce or difficult to obtain. The perception of limited availability creates a sense of urgency, making a product or service seem more desirable. There are three main forms of scarcity.

• Time-limited scarcity pushes users to act before a deadline, creating urgency to purchase, as seen in phrases like “limited-time offer”, “last chance”, or “sale ends soon”. 

• Quantity-limited scarcity increases demand by emphasizing limited stock or high popularity, using phrases such as “almost gone”, “rare find”, or “in high demand”. 

• Access-limited scarcity restricts availability to exclusive groups, making access feel more valuable, as in “Reserved solely for members” or “Welcome to Tinder Select”. 

Scarcity bias is closely linked to loss aversion, where people place greater subjective value on avoiding loss than on gaining something of equal worth. This psychological tendency drives quicker decision-making, as individuals are more likely to act immediately to avoid missing out on a perceived exclusive opportunity.
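The three scarcity forms above can be combined into a single badge-picking rule. This sketch, including the thresholds and message wording, is purely illustrative:

```typescript
// Picks one scarcity cue: deadline first (time-limited), then low stock
// (quantity-limited); returns null when no honest cue applies.
function scarcityBadge(stock: number, saleEndsInHours: number | null): string | null {
  if (saleEndsInHours !== null && saleEndsInHours <= 24) return "Sale ends soon";
  if (stock > 0 && stock <= 5) return `Only ${stock} left`;
  return null; // fabricated scarcity erodes trust once users notice it
}
```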

- Authority Principle

The Authority Principle is based on Authority Bias, which occurs when people overvalue the judgment of an authority figure, often at the expense of their own opinion. This bias leads individuals to place excessive trust in endorsements from experts such as scientists, doctors, industry leaders, lawyers, or law enforcement officials. In marketing, authority bias is commonly leveraged through statements like “clinically proven” or “9 out of 10 dentists recommend”, which use expert credibility to influence consumer decisions.

- Commitment and Consistency Principle 

This principle, also known as Commitment Bias, Escalation of Commitment, or Self-Consistency Bias, describes our tendency to remain committed to past behaviours, particularly those exhibited publicly, even when they do not lead to desirable outcomes. Once people commit to something, they are more likely to continue following through to maintain self-consistency. 

Encouraging commitment can be as simple as prompting users to create an account, sign up for a loyalty card, subscribe to social media, or start a free trial. For example, the Fitbit mobile app asks users to set personal fitness goals. This initial commitment translates into action, as users track their progress through dashboards and receive push notifications that keep them engaged. 

Some companies leverage this principle to increase questionnaire completion rates by adding prompts midway, such as “Continue answering”, reinforcing the user's initial engagement and encouraging them to follow through to the end. By applying Commitment and Consistency Bias, businesses can enhance user retention, engagement, and long-term loyalty. 
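A mid-survey nudge like the one described can be sketched as a simple progress-aware prompt; the threshold and wording here are invented for illustration:

```typescript
// Reminds users of their initial commitment once they are partway through,
// leaning on consistency to carry them to the end of the questionnaire.
function completionPrompt(answered: number, total: number): string | null {
  const progress = answered / total;
  if (progress >= 0.5 && progress < 1) {
    return `You're ${Math.round(progress * 100)}% done - continue answering`;
  }
  return null; // too early to remind, or already finished
}
```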

By recognizing and accounting for these biases, product teams can design experiences that feel more natural, fluid, and satisfying, ultimately leading to higher retention, engagement, and conversions.

​

bottom of page