Dr Michael Mosley deepfaked for vile online health scams after death

Dr Michael Mosley has appeared in deepfaked videos promoting health scams (Picture: ITV)

The late Dr Michael Mosley is among medical professionals who have been deepfaked in social media videos promoting health scams, an investigation has revealed.

TV doctors including Dr Hilary Jones and Rangan Chatterjee are also caught up in the worrying trend alongside the late broadcaster, who died at the age of 67 after he went missing in Greece last month.

A deepfaked video, which was circulated online after Dr Mosley’s death, poses as the late broadcaster talking about a product that ‘normalises’ blood sugar levels – the fake video then tells diabetes sufferers to ‘forget about insulin and other medication’ in what appears to be a vile scam.

None of those who appear in the videos – which are emerging on Facebook – endorse the counterfeit products the deepfaked videos are promoting.

‘Some of the products that are currently being promoted using my name include those that claim to fix blood pressure and diabetes, along with hemp gummies with names like Via Hemp Gummies, Bouncy Nutrition, and Eco Health,’ Jones said in the BMJ report.

A fake Dr Jones appeared in one fabricated video, which recreated him on ITV show Lorraine – where he often appears in real life.

While deepfakes have been around for some years, the technology is growing more sophisticated and the videos more convincing, with fewer audio lags and fewer obvious video errors.

The TV star and journalist died while on holiday in Greece last month (Picture: Brook Mitchell/Getty Images)

Dr Jones explained how the videos were difficult to control, because ‘even if they’re taken down, they just pop up the next day under a different name.’

John Cormack, a retired doctor who worked with The BMJ on the investigation, said: ‘The bottom line is, it’s much cheaper to spend your cash on making videos than it is on doing research and coming up with new products and getting them to market in the conventional way.’

A Meta spokesperson told The BMJ: ‘We will be investigating the examples highlighted by The BMJ. We don’t permit content that intentionally deceives or seeks to defraud others, and we’re constantly working to improve detection and enforcement.

‘We encourage anyone who sees content that might violate our policies to report it so we can investigate and act.’

The BMJ recommends that if you find a deepfake, you contact the person apparently endorsing the product to check whether it is legitimate, leave a comment questioning the video's authenticity, and report it to whichever platform you find it on.

This isn’t the first time a celebrity has been used in such videos against their will to promote products they have no relation to, and definitely don’t endorse.

Dr Hilary Jones was also deepfaked as part of a worrying trend (Picture: Ken McKay/ITV/Shutterstock)

Taylor Swift’s likeness was used to flog what looked to be Le Creuset cookware in an advert, which turned out to be a deepfake and was reported to be a scam.

In a less convincing scam, Tom Hanks seemingly flogged a dental plan, before he took to social media and informed fans this was an AI deepfake and most definitely not him.

Disturbingly, in March it was revealed that over 250 British celebrities are among thousands who have been victims of deepfaked porn.

One of these was Channel 4 News presenter Cathy Newman, who said: ‘It feels like a violation. It just feels really sinister that someone out there who’s put this together, I can’t see them, and they can see this kind of imaginary version of me, this fake version of me.’

In January this year, pornographic deepfake images of Taylor Swift went viral on X, where they were viewed millions of times, leading the social media site to block all searches for her name in an attempt to contain their spread.

How to spot a deepfake

Anti-malware software company Norton lays out how to spot a deepfaked video.

They advise looking out for…

  • Unnatural blinking – it’s hard to make AI-created eyes blink naturally
  • Odd facial movements – deepfakes work by taking an image of a real person and placing it onto another face or body, so look out for strange facial movements and features that look out of place
  • A lack of emotion – deepfakes often look emotionless
  • Awkward proportions – is their head much bigger than their body, or vice versa? Deepfakes are usually focused on the face, so tend to be less sophisticated when it comes to bodies and the overall person
  • Jerky movement – is the motion unnaturally jerky, a little like a worn DVD?
  • Strange colouring – does the colour of the video look abnormal, and are the shadows in the right place?
  • Suspiciously perfect hair – AI struggles to generate frizzy or flyaway hair, so if it looks like a particularly good hair day, be wary
  • Blurred teeth – if the teeth don’t look completely separated, this could be AI, as it so far can’t totally distinguish one from another
  • Audio problems – is there poor lip-syncing, or does the voice sound a little robotic? Are they seemingly outdoors but with no background noise?
  • Implausible claims – if the video claims ground-breaking health treatments and trustworthy news publications aren’t reporting them, the video could be fake

You can also use Google’s reverse image search tool to check whether an original image has been manipulated.
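Reverse image search works by reducing an image to a compact fingerprint and comparing fingerprints rather than raw pixels. As a toy illustration only (real services use far more robust techniques on full images), here is a minimal average-hash sketch in Python, assuming the images have already been decoded into small grayscale grids:

```python
# Toy "average hash": a simplified version of the fingerprinting idea
# behind reverse image search. Each pixel becomes a 1 if it is brighter
# than the image's mean brightness, else 0; similar images produce
# similar bit strings, so a small Hamming distance suggests a shared source.

def average_hash(pixels):
    """Return a bit string: '1' where a pixel exceeds the mean, else '0'."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 200], [30, 220]]   # tiny 2x2 grayscale grid
tampered = [[10, 200], [30, 40]]    # one region altered

h1, h2 = average_hash(original), average_hash(tampered)
print(hamming(h1, h2))  # → 1 (small distance: likely the same source image)
```

In practice, tools compare hashes of a suspect frame against an index of known images; a near-zero distance to a stock photo or an old broadcast still is a strong hint the "new" footage was manipulated from existing material.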

In January Zoe Ball took to the radio to alert fans to a scam, which used her face as an endorsement for a financial plan.

Zoe said: ‘A lot of people are calling it Apex AI I think and it’s making out that I’ve invested some money into this financial scheme and done quite well from it. 

‘And then it’s encouraging people who follow me to do the same. I’ve had so many people getting in touch asking if it’s a real thing.’

Meanwhile, much of Hollywood went on strike last year, with many famous faces publicly expressing their concern over AI taking acting and writing roles.

In April, Katy Perry, Billie Eilish and Stevie Wonder were among 200 names from the music industry who signed an open letter warning about the threats AI poses to them.

Submitted by the Artist Rights Alliance non-profit, the letter said: ‘We must protect against the predatory use of AI to steal professional artists’ voices and likenesses, violate creators’ rights, and destroy the music ecosystem.’
