ScienceTalk

Singlish-speaking robots and other ways to make AI work for S'pore and beyond

Cultural bias in data powering AI needs to be tackled to ensure AI is contextualised

ST ILLUSTRATION: CEL GULAPA

When the haze hit Singapore's shores in October, we asked our mobile phones for the latest PSI (Pollutant Standards Index) readings, only for the devices to draw a blank.

Speak to a voice-controlled application and chances are, if you are Singaporean, you will be asked to repeat yourself or be given a completely irrelevant answer.

Sometimes, you might even catch yourself trying to mimic a Western accent.

Singapore is one of the few Asian countries that recognise English as a main language, and many of us have had our fair share of such experiences.

While this may seem trivial, it is a symptom of a big issue - a cultural bias in the enormous amount of data powering artificial intelligence (AI) today.

AI systems learn via sophisticated pattern recognition, mapping complex inputs to outputs. This has enabled a diverse range of disruptive applications, from machine translation to self-driving cars.
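
As a toy illustration of this input-to-output mapping (a minimal sketch of our own, not any production system), consider a program that "learns" simply by memorising labelled examples and mapping a new input to the label of its nearest remembered pattern:

    # Toy pattern recogniser: memorise labelled feature vectors, then map a
    # new input to the label of the closest stored example (1-nearest-neighbour).
    def distance(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    def predict(examples, query):
        """examples: list of (feature_vector, label) pairs."""
        _, label = min(examples, key=lambda e: distance(e[0], query))
        return label

    # Hypothetical training data: simple acoustic-style features -> word label.
    training = [((0.9, 0.1), "yes"), ((0.8, 0.2), "yes"),
                ((0.1, 0.9), "no"), ((0.2, 0.8), "no")]
    print(predict(training, (0.85, 0.15)))  # prints "yes"

Modern systems replace this toy memorisation with deep neural networks, but the basic idea of mapping inputs to outputs via learned patterns is the same.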

But training data is collected, annotated and labelled predominantly in the West, which is also where AI assistants are primarily developed. As a result, these assistants function best in Western societies, since they are trained largely on data from Western speakers.

While China has declared its goal to be a global leader in AI by 2030, many of its AI applications were developed as part of a national strategy to serve its citizens. As such, the usefulness of AI systems to other Asian communities, including Singaporeans, may be limited.

If AI's cultural bias is not remedied, it will not just be voice-controlled virtual assistants that we are missing out on in this part of the world.

This disparity could even extend to the use of AI in healthcare to improve medical diagnosis.

Just last month, Deputy Prime Minister Heng Swee Keat announced Singapore's new national AI strategy that maps out how the country will leverage AI to transform the local economy and society.


Singapore can play a leading role in helping to adapt AI for Asia by focusing on tweaking existing systems for multicultural Singaporean use, and developing human-centric systems that better understand the cultural nuances, needs and preferences of the local community.

We should not attempt to play catch-up, but play to our strengths and take a targeted approach to AI research and development.

For AI to benefit us, it first needs to understand us - not only our common sense, but also the nuances of our cultures and values.

What is acceptable in one culture might be intolerable in another, just as people express themselves differently in one language than in another.

For AI to draw out and understand these subtle differences, it first needs the data to learn from.

LARGE VARIETY

Singapore's edge is in our multi-racial and multicultural society, which allows researchers to collect a large variety of data sets and develop algorithms that are adaptable to various communities, including the Chinese, Indians and Malays.

A good example is Singapore's National Speech Corpus - first announced in late 2017.

It is a database of audio files with locally accented words that can be used to train voice recognition software to better understand Singaporean accents. This will allow global technology providers to deliver better speech-related applications in Singapore.

Expand data collection to the entire South-east Asian region and you would cover a sizeable number of Asian languages and cultures.

The potential for AI in Asean is immense. For instance, with the personalisation of human-machine interfaces - be it by ethnicity, culture or environment - AI chatbots could allow call centres to become fully automated.

To ensure that AI is contextualised for Singapore, we have established strong collaborations across the institutes of higher learning, public sector research bodies such as the Agency for Science, Technology and Research (A*Star), and industry.

Take the example of an advanced speech recognition system developed by the Singapore Civil Defence Force, together with four other government agencies.

Using AI and deep learning, the system is able to transcribe and log each distress call received in real time - even if it is in Singlish or dialect.

It was developed in an AI Speech Lab, led by experts in speech, text and natural language processing from the National University of Singapore (NUS) and the Nanyang Technological University (NTU).

A similar project by the State Courts, developed together with A*Star's Institute for Infocomm Research (I2R), is an AI-powered transcription system that transcribes English oral evidence in court hearings in real time, allowing judges and parties to review oral testimonies immediately.

At present, transcripts of court proceedings - done manually through audio recordings - are typically completed within seven days, or within three days if urgent.

Researchers from I2R have also developed human language technology capabilities to automatically understand and extract important clinical information from spoken nurse-patient conversations.

Such capabilities could potentially enable healthcare providers to better manage patients with chronic diseases among the Asian populations.

To silence critics who have dismissed Singapore's ability to be a front-runner in AI because of our limited data, we are also diversifying our research and leapfrogging the traditional machine-learning methods that rely heavily on data.

Instead of simply depending on statistical models, we are now moving on to the third wave of AI, where the emphasis is on adapting to context. With adaptive reasoning, computer algorithms will, for instance, be able to discern the use of "principal" from "principle" by analysing the surrounding words in the sentence.
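
As a toy sketch of what "analysing the surrounding words" might look like in code - our own illustration, with hand-built word profiles that are pure invention, not the actual algorithms described here - consider:

    # Toy context-based disambiguation: pick "principal" or "principle" by
    # counting how many nearby words co-occur with each candidate's profile.
    CONTEXT_PROFILES = {
        "principal": {"school", "teacher", "office", "student", "loan"},
        "principle": {"moral", "scientific", "basic", "physics", "ethical"},
    }

    def disambiguate(sentence_words):
        """Score each candidate by overlap between its profile and the sentence."""
        context = set(w.lower() for w in sentence_words)
        scores = {cand: len(profile & context)
                  for cand, profile in CONTEXT_PROFILES.items()}
        return max(scores, key=scores.get)

    print(disambiguate("the school _ spoke to every student".split()))  # principal
    print(disambiguate("a basic _ of physics".split()))                 # principle

Real systems would of course learn such associations statistically from large corpora rather than from hand-written lists, but the underlying intuition - let the context decide - is the same.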

In this wave, AI systems must not only arrive at an appropriate answer; they must also be able to explain the reasoning behind it.

I2R has started the Learning With Less Data project, with the goal of developing novel AI algorithms that can learn with 10 to 100 times less data. The approach extends traditional machine-learning paradigms with methods that address data scarcity, such as incorporating external knowledge - facts about the world or physical constraints, for instance.
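
To make the idea concrete, here is a minimal sketch of our own (an assumption for illustration, not I2R's actual algorithms): with only three training points, a penalty term encoding prior knowledge - here, that the true relationship passes through the origin - steers the model towards the right answer.

    import numpy as np

    # Toy "learning with less data": three noisy points from y = 2x, plus a
    # penalty encoding the external knowledge that the intercept should be 0.
    rng = np.random.default_rng(0)
    x = np.array([1.0, 2.0, 3.0])               # only three training points
    y = 2.0 * x + rng.normal(0, 0.1, 3)         # underlying law: y = 2x

    w, b = 0.0, 1.0                             # model: y_hat = w*x + b
    for _ in range(2000):
        y_hat = w * x + b
        grad_w = 2 * np.mean((y_hat - y) * x)
        grad_b = 2 * np.mean(y_hat - y) + 20.0 * b   # 20*b: intercept penalty
        w -= 0.01 * grad_w
        b -= 0.01 * grad_b

    print(round(w, 2), round(b, 3))             # w near 2, b pulled towards 0

The penalty stands in for the many extra data points that would otherwise be needed to pin down the intercept - exactly the kind of trade the project aims to exploit.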

AMBITIOUS PROJECT

Looking ahead, local scientists will also be investing more time and manpower into Asian phenotype-based research that focuses on multi-modal understanding of human emotion and learning - not just from text, but also from speech and images.

One example of this is a large, ambitious project on human-robot collaboration, started last year and led by A*Star together with NUS, NTU and the Singapore University of Technology and Design. The project aims to enable robots to understand and work seamlessly with humans.

These robots will be trained to understand dialogue and gestures, using data collected from local volunteers, and will be attuned to Asian-specific nuances. Ideally, they would learn to pick up on local emotional expressions in humans, such as "aiyah" for frustration or impatience, and adjust their responses accordingly.
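
As a purely hypothetical sketch (not the project's actual approach), even a simple rule-based first pass could flag such Singlish particles as emotional cues before a response is chosen:

    # Toy Singlish affect detector: map known discourse particles to emotions.
    SINGLISH_CUES = {
        "aiyah": "frustration",
        "aiyoh": "dismay",
        "sian": "weariness",
        "shiok": "delight",
    }

    def detect_local_affect(utterance):
        """Return emotion labels for any known Singlish cues in the utterance."""
        tokens = utterance.lower().replace(",", " ").split()
        return [SINGLISH_CUES[t] for t in tokens if t in SINGLISH_CUES]

    print(detect_local_affect("Aiyah, the robot never listen again"))  # ['frustration']

In practice, such cues would be learned from the volunteers' speech and gesture data rather than hard-coded, but the example shows why locally collected data matters: these particles simply do not appear in Western corpora.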

To ensure it stays ahead of the game, Singapore is already steadily growing its AI talent pool through the AI Singapore Talent Portal, as well as other active recruitment initiatives.

Let's hope that one day, when we say "lah" to our devices, we will be completely understood.


• Professor Tan Sze Wee is assistant chief executive of A*Star's Science and Engineering Research Council, while Professor Ong Yew Soon is chief artificial intelligence scientist at A*Star and the president's chair professor of computer science at Nanyang Technological University.


A version of this article appeared in the print edition of The Straits Times on December 14, 2019, with the headline "Singlish-speaking robots and other ways to make AI work for S'pore and beyond".