The potential of AI to make the web more trusted
A new international study commissioned by WP Engine and conducted by researchers at The University of London and Vanson Bourne explored the present and near future of artificial intelligence (AI)-driven digital experiences on the web, and the often tenuous but potentially rewarding relationship between consumers, brands and AI.
The study, which surveyed consumers and enterprise companies (1,000 employees or more) in the US, UK and Australia, found that in an era of purpose-driven consumption, values – such as transparency, trust and humanness – are key drivers that unlock value in AI.
According to IDC, worldwide spending on AI systems is forecast to reach $35.8 billion in 2019, an increase of 44% over the amount spent in 2018. Much of that growth will come from the application of AI online, because there is a natural, evolutionary symbiosis between AI and the internet.
However, it was a sudden burst of activity starting in 2013 that marks the beginning of what we might term the modern AI period, especially for digital experiences – characterised predominantly by automated content creation, programmatic ad buying (from 2014) and intelligent search. With these advances, AI became a means of improving the customer journey and delivering real impact to organisational bottom lines.
This study explores the potential of AI on the web – suggesting that it will truly benefit the world when it is grounded in values rather than simply in topline business benefits.
The Values of AI
Despite the past year’s focus on the General Data Protection Regulation (GDPR) and privacy regulations designed to give consumers power over their data, nearly half of UK consumers (48%, according to a survey by CIM) still don’t know how brands are using their data. They remain concerned about the privacy of their personal information and online behaviours. As consumers demand that enterprises prioritise their data protection and become transparent about its use, collection and value, most enterprises have started having the necessary conversations about the role of ethics, data protection and consumer rights.
Personalisation systems tend to raise trust and ethical concerns, both from a privacy perspective and in terms of cross-device efficiencies. In the UK, both consumers and enterprises attached a high degree of importance to values issues such as the protection of data privacy and security, the expectation that organisations can explain transparently what they use data for, the degree of personalisation, and a clear and direct value for the exchange of data, to name a few. Consumers overall placed more importance on these issues, putting enterprises on notice. Most important to UK consumers was that proper privacy protections are in place during personalisation (92.2% net agreement), and yet only 82% of enterprises indicated this was important – a serious gap that needs to be bridged for enterprises to truly gain UK consumer trust.
Creepy or Cool?
An increasing number of digital users are now mindful of the “value exchange” that occurs with a brand when they participate in a digital experience. Not surprisingly, the willingness to share personal information in exchange for a better service was highest among millennials, with older generations less willing to trade their personal information for better, more personalised service.
Data sharing enables personalised services, and here AI is extremely capable – for example, pushing an ice cream advertisement to you as you walk past an ice cream shop on a hot day. Still, consumers worry that this may border on the obtrusive: 86.8% of UK consumers felt it was important that personalisation doesn’t feel ‘creepy’, and in response, 77% of UK enterprises agreed that avoiding creepiness is crucial. Ultimately, organisations need to walk a fine line: a person’s digital space must be respected in the same way as their physical space.
Lessons for the AI Age
The research resulted in several lessons for brands and agencies using AI in the digital age:
Open your algorithms: Enterprises are increasingly using AI-driven platforms to make impactful decisions, such as the allocation of jobs, loans or university admissions. There is therefore rising concern among users about how algorithms make these decisions: 92.7% of UK consumers feel it is very important for organisations to be transparent about how their data is used to create personalised online experiences. With this in mind, organisations have identified the need to incorporate the values of AI – primarily trust and transparency – into their strategies, and 41.5% of UK IT decision-makers say it is very important to be transparent about how they use AI to personalise the user experience.
Beware bias: 81.5% of UK IT decision-makers agreed that it is important for organisations to interrogate bias both in their organisations and in the data sets they use. This is particularly relevant for supervised machine learning, where the process of supervision gives brands the chance to change the way they do things. It also presents an opportunity to build diverse and inclusive teams that bring a variety of voices to that change: 79.5% of IT decision-makers agree that it is important that the teams building and maintaining AI systems are diverse.
Use only what you need: With GDPR, the world began to see governments putting structure around what data may be collected and how it may be used. The report shows that 86% of consumers do not want organisations tracking data they have no use for, and 92.8% of UK consumers said they expect organisations to explain what they are doing with their data.
Be customer inspired: Thanks to advancements in natural language processing and conversational AI, chatbots and digital assistants can now closely mimic their human counterparts. In fact, 56.4% of consumers surveyed indicated it’s important that websites have a chatbot or digital assistant to help with customer service, and 82% of enterprises are using AI in this way. However, 85% of UK consumers agreed that companies have a responsibility to disclose when AI is used in chatbots and similar customer-facing applications. And 77% of IT decision-makers agreed that when deploying customer service chatbots, users should be told that the service is not facilitated by human agents – demonstrating alignment between enterprises and consumers on the importance of this issue.