
Hello Caller, AI Is Listening

Is AI Freeing Us or Trapping Us?

Part of our series Severed Conscience, this article examines the use of AI to monitor our sentiments and to entrain responses into our psyche.

By The Mighty Humanzee
Technology, while a great equalizer and enabler, can mask issues we avoid addressing by reducing pain and effort.  The same technology that got us through COVID is causing us to ignore the erosion of relationships and the skyrocketing rates of depression and suicide in children.
Hello Caller, AI Is Listening

There are many common sense considerations that are lost in the hype of AI as the savior of all things.  Some of this is due to numerous terms being used interchangeably, which fosters confusion, and as with much of the tech world, there are many proponents who are under the impression that if they can repeat a phrase somewhat correctly and in context, they actually understand the technology and its impact.  We are going to use two terms, AI and Natural Language Processing, and for our discussion they will mean the same thing.  While AI is the overall term for computer analysis and decision making in place of a human, the ability to understand written text and then respond with phrases that match the context and meaning of a conversation is Natural Language Processing (NLP).  This is a vastly different technology from face recognition, which is also a form of AI.  For our focus here AI and NLP mean the same thing, and we’ll use AI for simplicity’s sake.  For further details, you can reference this article: The Difference Between Artificial Intelligence & Machine Learning

By chatting with an AI app on your phone, you will be guided out of your rough waters.  AI capabilities have advanced to the point where the responses have become eerily human, and in fact AI has been “clever” enough to fool a support center into creating an account.  In the mental health community there are proponents who advocate using AI to perform preliminary diagnoses when patient load is too high for mental health professionals.  This raises serious ethical issues, even if the process is monitored for quality much like any call center.  With people suffering from mental health issues and reaching for help, any mistake carried out by AI with the efficiency of computers can be catastrophic.  Among those considerations is the patient’s emotional response to AI, even if they are aware they are talking to a machine.  Efficiency is an important factor for the professional caregiver, but what of the patient’s needs?  What is the perception of a patient who feels the need for human contact, but is put on a path that doesn’t warrant interacting with a person to diagnose their needs in the first place?

Many studies are cited by proponents of AI’s use in mental health apps.  In particular, the study Bonding With Bot: User Feedback on a Chatbot for Social Isolation measured how users felt when chatting with AI about their mental states.  The measure, a Net Promoter Score, is borrowed from marketing.  While it sounds “sciency,” in the end it is merely a short term gratification factor: did the person feel “good” about using the app?  The prognosis of the study: because a majority of those surveyed felt good after interacting with the AI chatbot, use of the app must be good for them.

For example, a 2021 study of about 800 people experiencing self-reported social isolation and loneliness had them interact via text messages with a chatbot. The study found most users reported they were satisfied and would recommend the chatbot to a friend or colleague.
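For perspective, the arithmetic behind a Net Promoter Score is trivially simple and is not a clinical outcome measure.  Here is a minimal sketch of how such a score is typically computed, using invented numbers rather than figures from the study:

```python
# Hypothetical illustration of a Net Promoter Score (NPS) calculation.
# Respondents answer "How likely are you to recommend this to a friend or
# colleague?" on a 0-10 scale; the ratings below are invented, not study data.
ratings = [10, 9, 9, 8, 7, 10, 6, 9, 8, 10, 3, 9]

promoters = sum(1 for r in ratings if r >= 9)    # 9-10: "promoters"
detractors = sum(1 for r in ratings if r <= 6)   # 0-6: "detractors"

nps = 100 * (promoters - detractors) / len(ratings)
print(f"NPS: {nps:.0f}")  # a single satisfaction number for the product
```

Nothing in that calculation asks whether the person’s isolation or depression actually improved.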

This is staggering.  A child feels good when they climb a tree 20 feet above the ground, and a drug addict feels good when they get a hit.  These are short term feelings, but they do not chart a course toward objective long term goals.  Setting the criterion for success as whether the subject felt good about the interaction is short-sighted.  What if these interactions alleviated someone’s tension temporarily, but a few hours later they hit their depression once again?  And what if using the chat app again sufficiently satisfied the user’s short term anxiety yet created habitual use of the app to the extent that they interacted less with people?  The study overwhelmingly states only that those surveyed thought the interaction with AI chat was a good experience.

Isolation could be strongly reinforced by not disengaging from technology.  Embracing it further, because short term fixes for fear, anxiety, boredom and insecurity are more readily available than conversing with someone on social media, let alone seeking interpersonal contact, can lead to overuse.  Numerous studies relating to the effect of blue light from mobile and laptop devices on sleep cycles indicate that time in front of screens is not healthy.  Reduced cognitive function and concentration, memory loss, and irritability are conditions that contribute to depression.  A cycle of hitting highs and lows due to dopamine responses to a cheerful conversation, a funny joke or a new piece of interesting information forms a clear pattern of addiction.

Hello Caller, AI Is Judging

The big question is, how can AI deliver the types of results that are reflected in the studies?  One thing that is irrefutable is that AI is amazingly good at identifying patterns in large volumes of data.  Many times these patterns will coincide with the diagnosis a human clinician would make.  AI is also good at identifying patterns that humans may miss in large datasets, and it can do so far faster than humans can.

To respond in a human-like fashion, AI must be able to determine the context of a conversation, and the source of those judgments is ultimately supplied by humans.  Finding patterns is one thing when diagnosing cancer cells based on markers found in blood samples, but determining the mental state of an individual, and whether that state is loneliness or depression, is something else.  Yet mental health apps have been developed to deliver short term remedies for anxiety, such as guided breathing exercises.  Is that what is needed, or should someone learn not to hyperventilate after they receive push-back in a planning meeting?  This is a judgment call: you can argue that one approach is soothing, or that the other is harmful because it callously ignores the immediate needs of someone in a panic attack.  Who makes that call to place the advice into the AI’s vocabulary when guiding someone?
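To make that concrete, consider a deliberately simplified sketch of how a chatbot’s “judgment” gets into the system.  This is not Wysa’s or any vendor’s actual code; the categories, keywords and canned remedies below are hypothetical, but the structure illustrates the point: people author the buckets and the advice, and the software only decides which bucket your words fall into.

```python
# Simplified illustration only: the labels and canned remedies below are
# human-authored assumptions, not any real product's clinical logic.
REMEDIES = {
    "anxiety":    "Let's try a guided breathing exercise together.",
    "loneliness": "Would you like to talk about your day?",
    "low_mood":   "Here is a short mindfulness exercise.",
}

KEYWORDS = {  # the "judgment" lives in mappings like this, chosen by people
    "anxiety":    ["panic", "nervous", "overwhelmed", "hyperventilat"],
    "loneliness": ["alone", "lonely", "isolated", "no one"],
    "low_mood":   ["sad", "hopeless", "depressed", "empty"],
}

def classify(message: str) -> str:
    text = message.lower()
    for label, words in KEYWORDS.items():
        if any(w in text for w in words):
            return label
    return "low_mood"  # the default bucket is also a human decision

def respond(message: str) -> str:
    return REMEDIES[classify(message)]

print(respond("I got push-back in a planning meeting and now I feel overwhelmed"))
# -> the scripted breathing exercise, whether or not that is what the person needs
```

Whether a real product uses keyword rules or a statistical model, the remedies and the labels that trigger them are still chosen by someone.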

As you learn more about AI, you will often hear phrases like “AI learns” or “AI is trained.”  This means an AI application is fed data in ever higher volumes, and changes are introduced when the AI program reaches incorrect conclusions.
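A minimal sketch of what that looks like in practice, with invented data and a toy model rather than anything a real mental health app ships:

```python
# Toy illustration of "AI is trained": a tiny bag-of-words model whose weights
# are adjusted only when it draws the wrong conclusion.  All data is invented.
examples = [
    ("i feel so alone lately", 1),         # 1 = flag as "low mood" (human label)
    ("had a great walk with friends", 0),  # 0 = do not flag (human label)
    ("everything feels hopeless", 1),
    ("looking forward to the weekend", 0),
]

weights: dict[str, float] = {}

def score(text: str) -> float:
    return sum(weights.get(w, 0.0) for w in text.split())

for _ in range(10):                        # repeated passes over the data
    for text, label in examples:
        prediction = 1 if score(text) > 0 else 0
        if prediction != label:            # incorrect conclusion -> adjust weights
            step = 1.0 if label == 1 else -1.0
            for w in text.split():
                weights[w] = weights.get(w, 0.0) + step

print(score("i feel alone and hopeless") > 0)  # True: pattern learned from human labels
```

The “learning” is nothing more than weight adjustments driven by human-labelled examples; whatever biases sit in those labels are what the AI reproduces at scale.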

One company claims to have had 500 million conversations with over 5 million people in 95 countries.  Wysa (https://www.wysa.com/) is a partner of the World Economic Forum and a recipient of the FDA Breakthrough Device Designation for AI-led Mental Health.

Wysa conducted a study with the data accumulated from these AI conversations:

Wysa has held over half a billion AI chat conversations with more than five million people about their mental health across 95 countries. The worrying trend we saw in employee mental health led us to conduct in-depth studies of employees in USA and UK, as well as Wysa’s user base, to understand why current models aren’t working.

This report documents one of the largest observational studies of its kind globally, and examines the data from over 150,000 conversations that 11,300 employees across 60 countries had with Wysa.

There is a bias here, and it is this: AI has found an issue, and the answer is more AI.  There is depression among workers, and in order to save money and save businesses, AI assisted conversations are needed.

Where is the root cause analysis?  Is this an AI band-aid for the isolation and loneliness created by ordering people to remain in their homes during COVID, or for the stress of an economy so hobbled by pandemic restrictions that people fear for family members losing their jobs?  It doesn’t matter.

What about the additional screen time spent using Zoom for communication where you once would have met face to face?  What about not actually seeing people?  It doesn’t matter; the AI will read your responses and you’ll be encouraged to be mindful, to think happy thoughts, to practice some Zen principles instead of seeking counsel from someone who can pinpoint that you are right to be worried as society has shut down.  While your interactions may remain anonymous, they will be captured centrally and studied, and recommendations will be made to your managers.  What if you are in a small department of seven people, and now your manager can determine that you are the one with the depression problem?  With a team that small, it could be fairly easy.

Now consider that you have been flagged for depression, and this information must be supplied, by law, to authorities to make sure you are not an immediate threat to your neighbors.  What would those criteria be, and would you be given due process?  What if AI is used because there are not enough professionals to assess your mental state, but in order to be safe, your activities are now restricted?

Finally, should your employer or government be making suggestions on how you deal with your anguish, fear, love and joy?  Does this type of tracking and attempted behavior modification allow you to remain in control of the equation that ultimately judges your worth as an individual?

What if your values are determined to no longer match what society requires?  Will the AI be your friend then?
