Microsoft makes the Mona Lisa rap with AI technology. Photo courtesy: X page video grab

Mona Lisa is rapping in a new viral video; here's how Microsoft made it possible with AI

@indiablooms | Apr 21, 2024, at 07:35 pm

The iconic Mona Lisa is no longer only smiling: she can now sing and even rap, thanks to new artificial intelligence technology unveiled by Microsoft.

Last week, Microsoft researchers detailed a new AI model they have developed that can take a still image of a face and an audio clip of someone speaking and automatically create a realistic-looking video of that person speaking, CNN reported.

The results are striking, complete with lip-syncing and natural face and head movements.
In one demo video, researchers showed how they animated the Mona Lisa to recite a comedic rap by actor Anne Hathaway, the American news channel reported.

Describing the model, named VASA-1, Microsoft said: "We introduce VASA, a framework for generating lifelike talking faces of virtual characters with appealing visual affective skills (VAS), given a single static image and a speech audio clip. Our premiere model, VASA-1, is capable of not only producing lip movements that are exquisitely synchronised with the audio, but also capturing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness."

"The core innovations include a holistic facial dynamics and head movement generation model that works in a face latent space, and the development of such an expressive and disentangled face latent space using videos. Through extensive experiments including evaluation on a set of new metrics, we show that our method significantly outperforms previous methods along various dimensions comprehensively. Our method not only delivers high video quality with realistic facial and head dynamics but also supports the online generation of 512x512 videos at up to 40 FPS with negligible starting latency. It paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviours," the company said on its website.
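Microsoft has not released VASA-1 publicly, so the snippet below is only a toy sketch of the interface the announcement describes: a single still image plus a speech clip in, a sequence of 512x512 video frames out at up to 40 FPS. The function name `generate_talking_face` and its signature are hypothetical, and the body does none of the real work (no face latent space, no learned dynamics); it only illustrates the shape of the inputs and outputs.

```python
import numpy as np

FPS = 40          # VASA-1's reported online generation rate
FRAME_SIZE = 512  # reported output resolution (512x512)

def generate_talking_face(still_image: np.ndarray,
                          audio_seconds: float) -> np.ndarray:
    """Toy stand-in for a VASA-1-style pipeline: one static face image
    and a speech clip's duration in, a stack of video frames out.
    A real model would predict lip-synced facial dynamics and head
    motion in a learned face latent space; here each frame is just a
    copy of the input image to show the data flow."""
    n_frames = int(round(audio_seconds * FPS))
    # Output shape: (frames, height, width, channels)
    return np.repeat(still_image[np.newaxis, ...], n_frames, axis=0)

# A 3-second speech clip at 40 FPS should yield 120 frames.
portrait = np.zeros((FRAME_SIZE, FRAME_SIZE, 3), dtype=np.uint8)
video = generate_talking_face(portrait, 3.0)
print(video.shape)  # (120, 512, 512, 3)
```

The point of the sketch is the economics of the claim: at 40 FPS, every second of audio requires the model to synthesize 40 full-resolution frames online, which is why the "negligible starting latency" figure in the quote is notable.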
