UN peacekeeping missions can deploy personnel into complex and unknown environments, where personnel are required to deal with a range of stressful and potentially volatile situations.
But simulation training developed by the Australian Defence College, and now being refined in collaboration with the University of Newcastle, is providing training for personnel that would not otherwise be possible.
This includes training for deployments into parts of Africa and the Middle East, where peacekeepers may need to deal with scenarios such as child soldiers.
In these types of scenarios, it’s not possible to carry out real-life role plays due to ethical considerations.
“It’s a unique area where it really brings together computer science, engineering, arts and psychology to develop realistic virtual training scenarios and systems for a combination of skill training areas,” said Karen Blackmore, senior lecturer in the School of Electrical Engineering and Computing at the University of Newcastle.
Blackmore has been working with avatar-augmented role plays where an expert drives interactions with a trainee through a full-body avatar.
The simulations use a range of technologies – both markerless and marked motion capture – to combine streams of data in real time to drive the expressions and movements of an avatar.
One of the key challenges of the work is how to accurately convey emotions like anger and fear through an avatar to elicit realistic responses from a trainee.
“At the moment we’re using a head-mounted camera that points back towards a human face, and that camera effectively does facial recognition and pulls out points of reference from the human face and then matches those points of reference,” Blackmore said.
The technology is similar to that used in Hollywood film studios. But unlike Hollywood, which has big budgets and can carry out substantial post-processing, the ADC simulations run in real time.
This introduces the risk of lag, glitches, or unnatural avatar movement, depending on the technology and equipment used.
There are also synchronisation challenges in allowing actors to play both male and female avatars using voice-morphing software, as well as the question of how realistic an avatar should be.
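The real-time retargeting Blackmore describes, tracking points of reference on the actor's face and mapping them onto the avatar each frame, can be sketched roughly as below. This is a minimal illustration only: the landmark names, calibration approach, and blendshape names are all hypothetical, not details of the ADC system.

```python
# Hedged sketch: map tracked facial landmarks to avatar blendshape weights.
# Landmark names ("upper_lip", "brow", ...), thresholds, and blendshape
# names ("jawOpen", "browUp") are invented for illustration.

def retarget_landmarks(frame, neutral, max_open=30.0, max_raise=10.0):
    """Convert one frame of tracked landmarks into blendshape weights.

    frame / neutral: dicts mapping landmark names to (x, y) pixel
    positions from the head-mounted camera; `neutral` is a calibration
    frame of the relaxed face. Returns weights in [0, 1] that would
    drive the avatar's expression for this frame.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    def clamp01(x):
        return max(0.0, min(1.0, x))

    # Mouth openness: how far the lips have separated vs. the neutral frame.
    mouth_delta = (dist(frame["upper_lip"], frame["lower_lip"])
                   - dist(neutral["upper_lip"], neutral["lower_lip"]))

    # Brow raise: how far the brow has lifted away from the eye.
    brow_delta = (dist(frame["brow"], frame["eye"])
                  - dist(neutral["brow"], neutral["eye"]))

    return {
        "jawOpen": clamp01(mouth_delta / max_open),
        "browUp": clamp01(brow_delta / max_raise),
    }


# Example: a calibration frame and one live frame with the mouth opening
# and the brow lifting.
neutral = {"upper_lip": (0, 50), "lower_lip": (0, 55),
           "brow": (0, 10), "eye": (0, 20)}
frame = {"upper_lip": (0, 48), "lower_lip": (0, 68),
         "brow": (0, 5), "eye": (0, 20)}

weights = retarget_landmarks(frame, neutral)
```

In a real pipeline this function would run per camera frame, and the lag and glitches mentioned above arise when tracking drops landmarks or the mapping produces sudden jumps between frames.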
“We took a measure of humanness and found, effectively, that male avatars get perceived as being less human than female avatars by both genders,” Blackmore said.
“Similarly, we had a male and a female avatar that were both created using exactly the same techniques, with huge differences of how human those avatars were perceived. Again, the female avatars were generally seen as more human than male avatars.”
Simulation training is not only used for military personnel; it also has other applications. In health, for example, it is used to prepare university graduates for difficult patient-facing scenarios, such as a patient demanding a particular kind of drug.
The ADC also provides language training for personnel who deploy to regions where English is not the first language, with plans to use simulation technologies for training in this area.
For example, defence units that are deployed to neighbouring countries require a base level of language skills to communicate with locals and develop trust.
In this situation, aspects other than just the spoken word, such as intonation and facial expressions, can also be important to learn.
“If you’re using language to interact with perhaps hostile villages, then there’s a set of communication skills that extend beyond spoken word that need to be conveyed and need to be able to be interpreted by the trainee,” Blackmore said.
“This is effectively a tool that will eventually allow the trainee, particularly within our defence personnel, to learn a language that’s not removed from that broader set of communication skills that accompany the spoken word.”
Blackmore said the team would eventually like to have artificial intelligence (AI) perform some of the roles instead of experts.
But she said it’s not yet possible to achieve that kind of rich, dynamic response in all situations. Getting to that point would involve using a combination of facial recognition and voice recognition to effectively take what a human being is saying and parse that language into an AI tool.
The team could then map it to an associated set of responses or potential responses that the AI tool would put together.
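The pipeline described above, take what a human is saying, parse it, and map it to a set of candidate responses, could be sketched in its simplest form as follows. The intents, keywords, and scripted responses here are invented placeholders, not anything from the ADC project, and a real system would use proper speech and language models rather than keyword matching.

```python
# Hedged sketch: map a speech-recognition transcript to an avatar response.
# All intents, keywords, and responses are illustrative placeholders.

RESPONSES = {
    "greeting": "The avatar returns the greeting cautiously.",
    "threat": "The avatar backs away and raises its hands.",
    "question": "The avatar shrugs and points toward the village.",
}

KEYWORDS = {
    "greeting": {"hello", "hi", "greetings"},
    "threat": {"weapon", "stop", "hands"},
    "question": {"where", "who", "what"},
}


def respond(transcript):
    """Parse a transcript into an intent and pick a scripted response.

    Stands in for the 'parse that language into an AI tool' step: a real
    system would classify intent with an NLU model, not keyword overlap.
    """
    words = set(transcript.lower().split())
    for intent, vocab in KEYWORDS.items():
        if words & vocab:
            return intent, RESPONSES[intent]
    return "unknown", "The avatar looks confused."


intent, reply = respond("Hello there")
```

The "rich, dynamic response" problem Blackmore raises is visible even here: anything outside the keyword lists falls through to a generic fallback, which is exactly what an expert-driven avatar avoids.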
“It’s certainly flagged as something that we would like to do in the future,” Blackmore said.
“As we develop it and as we deploy it, there will be interesting challenges that we haven’t foreseen.”