The bias in AI doesn’t stop short of the retail and POS space – in fact, there it becomes an even more pressing issue.

The AI used in retail spaces and the POS market – especially in its newer forms – relies heavily on machine learning, sensor fusion, and computer vision. In Amazon’s “Just Walk Out” stores, for example, this technology tracks consumer movement and behavior and enables an entirely automated shopping experience.

Customers shop as they please, picking up and putting down items at will, and expect to be charged an accurate total. They converse with self-ordering kiosks and expect to be understood and answered accordingly.

However, that is not always the case. These algorithms and machine learning processes share many of the same problems as the rest of the AI space. The algorithms that power these stores and technologies are informed by much of the same biased data that produces racially biased facial recognition, predictive policing, and computer vision systems.

With the added stakes of the theft-monitoring features that many of these new systems employ, Black and other marginalized people are put at risk of further discrimination. Anti-theft detection sensors and technologies can falsely and disproportionately flag BIPOC as shoplifters.

BIPOC and anyone who speaks with an accent risk being unable to use conversational ordering systems that weren’t trained on a diverse sample of voices. For these reasons and more, the technologies and algorithms we are beginning to apply throughout the retail and POS space risk further marginalizing already marginalized communities, as we have seen happen time and time again due to biased data and researchers.

Necessary Steps Going Forward

 

Image: The Future of AI in Stores.

In addressing these issues, it is important to remember that bias is not an inherent quality of the data these machines learn from. Treating it that way is a reductionist view that avoids centering the real issue: societal discrimination. The algorithms are constructed, the data is collected, and humans do both – the machine only learns their biases, then amplifies and exercises them.

“It is inevitable that values are encoded into algorithms,” Arvind Narayanan, a computer scientist at Princeton, told Vox.

“Right now, technologists and business leaders are making those decisions without much accountability.”

This lack of accountability comes from the scarcity of laws surrounding AI. Recently, New York became one of the first states to ban employers “from using automated employment decision tools to screen job candidates unless the technology has been subject to a ‘bias audit’ conducted a year before the use of the tool,” Bloomberg reported.

Several states – California, Massachusetts, Missouri, Nevada, and New Jersey – have already introduced laws related to AI. This list is expected to grow as stakeholders become more aware of the issues surrounding AI.

Starting to remedy the racism, misogyny, and discrimination present in AI and machine learning research is a complex process, but it begins with examining and undoing the biases present in society and fully understanding how they have permeated every aspect of our technology. With this understanding, current algorithms and data sets must be closely and continuously audited for biased outcomes. 
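
To make that auditing concrete, here is a minimal sketch in Python of one check an audit might run: comparing a system’s positive-outcome rates across demographic groups and flagging disparities. The groups, decisions, and the 0.8 “four-fifths” threshold here are illustrative assumptions, not a prescribed or legally sufficient audit.

# A sketch of one step in a bias audit: comparing positive-outcome rates
# across demographic groups and flagging large disparities.

from collections import defaultdict

def outcome_rates_by_group(records):
    """Rate of positive outcomes per group.

    `records` is a list of (group, outcome) pairs, where outcome is True
    when the system made a favorable decision (e.g., approved a candidate).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's rate divided by the highest group's rate.

    A ratio well below 1.0 flags a disparity worth investigating; the
    oft-cited "four-fifths rule" uses 0.8 as a rough threshold.
    """
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical audit data: (group, was the outcome favorable?).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = outcome_rates_by_group(decisions)
for group, ratio in impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rates[group]:.2f}, impact ratio {ratio:.2f} [{flag}]")

A check like this is only a starting point – a flagged ratio tells auditors where to look, not why the disparity exists.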

“This is not just a technical problem. This is a problem that involves the social sciences,” Kai-Wei Chang, an associate professor at the UCLA Samueli School of Engineering who studies artificial intelligence, told NBC.

There will be a future in which systems better guard against certain biased notions, but as long as society has biases, AI will reflect them, Chang added.

Data samples should be manually screened to ensure they are diverse and representative. Data scientists need to be cognizant of existing biases and problematic datasets and use this knowledge to inform data collection and refinement.
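
As a hedged illustration of what that screening could look like, the sketch below compares each group’s share of a dataset against reference population shares and flags underrepresented groups. The group labels, shares, and five-point gap threshold are hypothetical.

# A sketch of screening a dataset for representativeness: comparing each
# group's share of the samples to its share of a reference population.

from collections import Counter

def representation_gaps(sample_groups, reference_shares):
    """Difference between each group's dataset share and reference share."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - expected
            for group, expected in reference_shares.items()}

# Hypothetical group labels for training samples and census-style shares.
training_groups = ["a"] * 800 + ["b"] * 150 + ["c"] * 50
reference = {"a": 0.60, "b": 0.25, "c": 0.15}

for group, gap in representation_gaps(training_groups, reference).items():
    status = "underrepresented" if gap < -0.05 else "ok"
    print(f"group {group}: share gap {gap:+.2f} ({status})")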

Some algorithms that lend themselves readily to discrimination – like those that claim to predict sexual orientation – should not be constructed at all. Research and development teams should be diverse and inclusive. And the outcomes of deployed algorithms should be continually monitored and cross-checked, as sketched below.
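
As one example of such cross-checking, the following sketch compares false positive rates across groups – the kind of disparity that matters for a theft-detection system that wrongly flags some shoppers more often than others. All event data here is invented for illustration.

# A sketch of cross-checking deployed outcomes: comparing false positive
# rates across groups, e.g., innocent shoppers wrongly flagged for theft.

from collections import defaultdict

def false_positive_rates(events):
    """False positive rate per group among people who did nothing wrong."""
    innocent = defaultdict(int)
    wrongly_flagged = defaultdict(int)
    for group, was_flagged, was_theft in events:
        if not was_theft:
            innocent[group] += 1
            if was_flagged:
                wrongly_flagged[group] += 1
    return {g: wrongly_flagged[g] / innocent[g] for g in innocent}

# Hypothetical monitoring log: (group, flagged by system, actual theft).
events = [
    ("group_a", False, False), ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

# A large gap between groups is a red flag that the system accuses one
# group more often and needs human review before anyone is confronted.
for group, rate in sorted(false_positive_rates(events).items()):
    print(f"{group}: false positive rate {rate:.2f}")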

While AI technology may at first seem simply to improve day-to-day business operations, it is important to recognize the biases this technology can carry and their implications for the people it touches.