The Dark Side of AI That No One Talks About - Tricks and Manipulations to Exploit Human Behaviour



Some of the world’s biggest tech firms have soared in value over the last year. These massive valuations are bets that AI will hugely increase future profitability. In some cases they are bets that AI capabilities will improve towards some kind of “artificial superintelligence” able to perform everything a human can – or even more. That, the argument goes, could raise the living standards of everyone on Earth.


If investors begin to fear that AI profits won’t materialize, they will attempt to reclaim their investments. That realization can arrive quite suddenly and can be triggered by seemingly trivial events: it doesn’t take a big needle to pop a bubble.


AI companies more generally do not appear to be profitable right now. Investors are not 

putting their money into today’s losses – they are betting on an AI future.


However, the big four – Meta, Alphabet, Microsoft and Amazon – are spending massive amounts of money on AI infrastructure this year. This is not investment in new targeted ads; it is investment in an AI future. The bubble will burst if and when that future comes into doubt. Meanwhile, there is a dark side of AI that needs to be put in the spotlight.


It is no exaggeration to say that popular platforms with loyal users, like Google and 

Facebook, know those users better than their families and friends do. Many firms collect an 

enormous amount of data as an input for their artificial intelligence algorithms. Facebook 

Likes, for example, can be used to predict with a high degree of accuracy various 

characteristics of Facebook users: “sexual orientation, ethnicity, religious and political 

views, personality traits, intelligence, happiness, use of addictive substances, parental 

separation, age, and gender,” according to one study.


If proprietary AI algorithms can determine these from the use of something as simple as the 

‘like’ button, imagine what information is extracted from search keywords, online clicks, 

posts and reviews.
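To make the kind of inference described above concrete, here is a minimal sketch of a trait classifier trained on synthetic “like” data. Everything is made up: the pages, the trait, and the probabilities are illustrative assumptions, not real Facebook data or the study’s actual method.

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=200):
    """Tiny from-scratch logistic regression trained with per-sample
    gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))       # predicted probability
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if 1 / (1 + math.exp(-z)) >= 0.5 else 0

# Synthetic "page likes": each row marks whether a user liked pages
# A, B, C; the (made-up) label is some binary trait of the user.
random.seed(0)
X, y = [], []
for _ in range(200):
    trait = random.randint(0, 1)
    likes = [
        # Page A: users with the trait like it far more often.
        1 if random.random() < (0.9 if trait else 0.1) else 0,
        random.randint(0, 1),   # page B: uninformative noise
        random.randint(0, 1),   # page C: uninformative noise
    ]
    X.append(likes)
    y.append(trait)

w, b = train_logistic(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

On this toy data, one informative “like” is already enough to predict the trait with roughly 90% accuracy, which is the essence of the finding: a few binary signals go a surprisingly long way.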


Giving comprehensive AI algorithms a central role in the digital lives of individuals 

carries risks. For example, the use of AI in the workplace may bring benefits for firm 

productivity, but can also be associated with lower quality jobs for workers. Algorithmic 

decision-making may incorporate biases that can lead to discrimination (e.g. in hiring decisions, access to bank loans, health care, housing and other areas).


Manipulative marketing strategies have existed for a long time. However, combined with the collection of enormous amounts of data for AI algorithmic systems, these strategies have vastly expanded what firms can do to drive users towards choices and behaviour that ensure higher profitability. Digital firms can shape the framing and control the timing of their offers, and can target users at the individual level with manipulative strategies that are far more effective and far harder to detect.


What machine learning and AI run on is data. What a firm wants to figure out is the minimum amount of data it needs to know about someone in order to predict how they are going to behave, in both the short term and the long term. Influence enters where one uses well-understood techniques about people not only to get them to take a particular action, but to adopt the goals one has for them; once they do that, they will do whatever is needed to accomplish those goals.
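The “minimum amount of data” idea above can be sketched as a search for the smallest set of signals that already predicts behaviour well. Everything below is hypothetical: the behavioural log, the feature names, and the threshold are illustrative assumptions, not a real system.

```python
from collections import Counter

def single_feature_accuracy(X, y, j):
    """Accuracy of predicting y from feature j alone, by majority vote
    within each value that feature j takes."""
    buckets = {}
    for xi, yi in zip(X, y):
        buckets.setdefault(xi[j], []).append(yi)
    correct = sum(Counter(labels).most_common(1)[0][1]
                  for labels in buckets.values())
    return correct / len(y)

def smallest_sufficient_feature(X, y, threshold=0.8):
    """Greedy sketch of the 'minimum data' question: is there a single
    signal that already predicts behaviour above the threshold?"""
    ranked = sorted(range(len(X[0])),
                    key=lambda j: single_feature_accuracy(X, y, j),
                    reverse=True)
    best = ranked[0]
    acc = single_feature_accuracy(X, y, best)
    return (best, acc) if acc >= threshold else (None, acc)

# Hypothetical behavioural log: [time_of_day, device, clicked_before]
X = [[0, 1, 1], [0, 0, 1], [1, 1, 0], [1, 0, 0],
     [0, 1, 1], [1, 0, 0], [0, 0, 1], [1, 1, 0]]
y = [1, 1, 0, 0, 1, 0, 1, 0]   # will the user click?

feature, acc = smallest_sufficient_feature(X, y)
```

In this contrived log a single signal predicts the click perfectly, which is the point: a firm does not need everything about you, only the few signals that carry your behaviour.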


Moreover, AI is capable of understanding human social behaviour and manipulating it to make you click on an advertiser’s product, as Dr. Charles Isabelle explains using a video game developer as an example. The developer places three doors in front of you in a mystery game; walking through each door convinces you that you are exploring more of the game, but in reality you reach the same end regardless of which door you choose. The developer has given you the illusion of choice. He also suggests that if you show that a product is limited, the customer will decide for themselves that it is valuable. This, he says, is one of the best ways to make the customer feel in control of their engagement with you. AI may well use scarcity as one of the many tools in its repertoire when creating advertisements and copy.


AI in the workplace: what’s at stake?

Whilst AI in the workplace can be helpful, it also has significant, potentially irreversible side effects that harm workers. The main problem is a practice called Algorithmic Management (AM), where AI software assigns tasks, monitors performance, and manages workers without human input.


The core issue is that these AI systems are designed to maximize efficiency and profit, but 

they do this by stripping away workers' autonomy and control over their jobs. For example, 

in warehouses, algorithms minimize break times, and in retail, they create unpredictable 

schedules. This mirrors old "scientific management" theories from the early 1900s that led 

to terrible working conditions and high staff turnover.
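A minimal sketch of what an Algorithmic Management scheduler optimises for, assuming a made-up warehouse task list: tasks flow greedily to whichever worker is free first, and nothing in the objective represents the workers’ own preferences, pacing, or break needs.

```python
def assign_tasks(tasks, workers):
    """Greedy sketch of algorithmic management: each task goes to
    whichever worker frees up first, minimising total idle time.
    Worker autonomy never enters the objective function."""
    finish = {w: 0 for w in workers}           # time each worker is free
    schedule = []
    for name, duration in sorted(tasks, key=lambda t: -t[1]):
        w = min(finish, key=finish.get)        # earliest-free worker
        schedule.append((w, name, finish[w], finish[w] + duration))
        finish[w] += duration
    return schedule, max(finish.values())

# Hypothetical task durations in, say, minutes.
tasks = [("pick", 4), ("pack", 3), ("scan", 2), ("load", 2), ("sort", 1)]
schedule, makespan = assign_tasks(tasks, ["w1", "w2"])
```

The algorithm packs twelve minutes of work into a six-minute makespan across two workers, with zero slack: exactly the efficiency-at-all-costs logic the research on AM criticises.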


Lack of job control is not just stressful; decades of research link

it to serious long-term health problems, including heart disease and mental health issues. 

There is a real risk that AI could reverse decades of progress in job quality.


Success from opacity


AI systems are increasingly being used to manipulate our behavior, and a major reason 

they're successful is because they're so opaque—we often don't know their true objectives or

how they're using our personal data. We see this in real-world cases like Target predicting 

pregnancies to send hidden ads, or Uber potentially charging more when your phone battery is

low. The problem isn't just theoretical; experiments have shown that AI can reliably learn 

our decision-making vulnerabilities and guide us toward specific choices, like making us 

more error-prone or steering financial decisions in its favor. The primary driver here is 

profit, where companies use AI to nudge us toward choices that benefit them, even if those 

choices aren't in our best interest and actually reduce our economic well-being.
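The experiments mentioned above are in the spirit of bandit algorithms. Below is a toy epsilon-greedy sketch with invented click-through rates for three framings of the same offer (neutral, scarcity, social proof); the learner never sees those rates, only clicks, yet converges on the most persuasive framing.

```python
import random

def epsilon_greedy(click_probs, rounds=20000, eps=0.1, seed=1):
    """Sketch of how an opaque system can learn which framing of an
    offer a user is most susceptible to, purely from click feedback.
    click_probs are hidden ground truth; the learner only sees clicks."""
    rng = random.Random(seed)
    counts = [0] * len(click_probs)
    values = [0.0] * len(click_probs)     # running mean reward per arm
    for _ in range(rounds):
        if rng.random() < eps:            # explore a random framing
            arm = rng.randrange(len(click_probs))
        else:                             # exploit the best so far
            arm = max(range(len(click_probs)), key=lambda a: values[a])
        reward = 1 if rng.random() < click_probs[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return max(range(len(click_probs)), key=lambda a: values[a])

# Invented rates: neutral 3%, scarcity ("only 2 left!") 30%, social proof 10%.
best = epsilon_greedy([0.03, 0.30, 0.10])
```

With enough feedback the system settles on the scarcity framing, without anyone ever telling it what scarcity is, which is what makes this class of manipulation so hard to audit from the outside.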


To tackle this, we need a multi-layered solution. First, we must demand greater transparency, forcing companies to be clearer about how their AIs work and use our data. However,

transparency alone isn't enough. The second step is to ensure this transparency is enforced 

through human oversight and strong accountability frameworks, giving regulators the tools to

 investigate and punish wrongdoing. Third, we need to establish clear rules that explicitly 

prohibit AI systems from using these secret manipulative strategies that cause economic harm.

The challenge is that it's often incredibly difficult to distinguish clever, legitimate 

recommendation engines from manipulative ones, as seen in the decade it took to build the 

case against Google Shopping.


Finally, because detection is so hard, the fourth crucial step is to boost public awareness,

educating people from a young age about these risks to build societal resilience. The tricky

 part is that current regulations, like the EU's new AI Act, are insufficient because they 

focus on preventing physical or psychological harm, but largely ignore the economic harm 

that is at the heart of most AI manipulation. While AI holds incredible promise for society,

 we urgently need a smarter regulatory framework that protects our autonomy and economic 

interests without stifling innovation, ensuring we can safely reap the full benefits of the 

AI revolution.


