AI Fake Medical Videos Are Surging—How to Detect Them, According to Experts | FRIDAY DIGITAL

A fake doctor created by the author using AI. It was made in no time with simple instructions.

AI doctors spreading false medical information

On October 28, an international conference on AI crime countermeasures was held in Tokyo. Police and experts from various countries sounded the alarm, saying, “AI crime is a cross-border threat.”

Supporting these concerns is the rapid rise of AI-generated fake doctors. On YouTube and TikTok, countless fake medical videos feature AI-generated doctors confidently dispensing health advice. Many look so real that viewers cannot tell they are fake.

“Nowadays you can generate scripts, voice, and video entirely with generative AI apps. Anyone can mass-produce real-looking articles and videos in a short time. AI-generated articles and AI videos are becoming more common than human-made ones.”

So says medical AI researcher Karisu.

Even in filmmaking, it is no longer unusual for scripts, backgrounds, and even performers to be created by AI, making them indistinguishable from reality. The sophistication of fake videos is evolving daily.

“Well-crafted AI videos cannot be distinguished by the human eye. Not only short clips—20- to 30-minute long videos can be generated easily. So the number of AI fake creators seeking ad revenue is only increasing.” (Quotes below are from Karisu.)

Although falsely claiming to be a physician violates Japan's Medical Practitioners' Act, on social media anyone can simply declare "I am a doctor." A white coat and a calm tone of voice: people instinctively trust such appearances. Karisu issues another warning:

“Even real doctors often say questionable things on social media for self-promotion. Fake doctors are far worse. Yet many people believe them simply because "a doctor said so," leading them to try incorrect health methods or dangerous supplements. This tendency is especially strong in Japan, which has the world's highest proportion of elderly people.”

In reality, many videos feature supposed doctors (actually AI-generated fakes) claiming things like “Drink this and your cancer will disappear” or “This method cures diabetes,” clearly targeting and exploiting the elderly.

“Platforms like YouTube prioritize videos that quickly trigger dopamine. Sensational content outranks scientific accuracy, and extreme claims tend to spread more widely. In fact, an MIT study found that false information spreads about six times faster than accurate information. And honestly, if you post something correct like ‘Quit smoking and drinking, get enough sleep, and exercise daily,’ nobody watches it.”

The problem is that sensational lies spread far more easily than useful, accurate information.

Facial recognition system in operation at Umeda Station in Osaka.

AI Is Evolving Medicine

Fake experts spreading misinformation—this problem is not unique to Japan. Overseas as well, AI-based fraud and fake specialists are increasing.

“Cybercrime often uses overseas servers that pass through multiple countries, making identification extremely difficult. Moreover, Japan’s regulations don’t apply in many cases. And with so many offenders, police with limited budgets and staff must prioritize more serious crimes.”

Japan has only just begun regulating AI. Fraud and false claims are already illegal under existing laws, but enforcement remains difficult. Although AI is misused for fake videos and scams, it is also indispensable in everyday life—facial recognition on smartphones, automatic photo enhancement, route optimization in map apps, etc.

AI is starting to show enormous power in the medical field. Many medical AIs have already been approved. Even a short list looks like this:

・CXR-AID (AI detecting nodules, infiltrates, pneumothorax on chest X-ray)
・MGCAD-i (AI detecting lesions on mammography)
・InferRead CT Pneumonia (AI detecting COVID-19 pneumonia on chest CT)
・EIRL Brain Aneurysm (AI detecting unruptured aneurysms on head MRA)
・gastroAI model-G2 (AI detecting early gastric cancer/adenoma in endoscopy)
・RA-100 (AI assessing findings on fundus photographs)

In the hands of scammers, AI in medicine produces nothing but fake health information. But when specialists and AI engineers collaborate, medical AI and drug-development AI can enable accurate, early diagnosis, relief of physician shortages, lower medical costs, and longer healthy lifespans. These AIs can detect and classify cancers and other lesions on X-ray, CT, MRI, and pathology images, and can even predict prognosis. It is all a question of how the tool is used.

Some say that as medical AI advances, doctors will become unnecessary, but Karisu rejects this idea.

“In the AI era, the role of knowledge workers is shifting from thinking to taking responsibility and bringing people together. These are uniquely human roles that AI cannot perform. Even if AI performs diagnosis, the final approval must come from a physician.

Doctors also guide patients, explain clearly, and help them make the best decisions. AI expands knowledge through big data, and humans take responsibility and lead others. This collaboration is the ideal relationship for the AI era. The same applies to lawyers, engineers, researchers, and other professionals.

Unfortunately, Japan is 5–10 years behind in medical AI and drug-development AI. Only about 40 AI medical devices are approved domestically, roughly one-thirtieth the U.S. figure and one-tenth South Korea's. Yet as the world's most aged society with universal healthcare, Japan holds vast amounts of high-quality medical imaging data.

But vague fears about data leaks and anonymization have hindered data sharing, leading to this delay. That’s why I founded Karisto Inc., which provides fully anonymized, curated, standardized, and annotated medical imaging datasets for medical-AI/drug-AI development. We already supply many medical-AI companies, device makers, pharmaceutical firms, and academia worldwide.”

AI is both convenient and dangerous. So how can we avoid being fooled by fake videos and survive in the AI era?

“Always ask yourself, ‘Why is that so?’ and ‘What does it mean?’ Interpret things in your own words, and investigate whenever something feels off. AI does not output ‘the correct answer’; it produces the most plausible answer based on prompts.

Understanding this nature and using AI as a tool—that is AI literacy. Also, don’t rely on just ChatGPT—cross-check with Gemini and other AIs. Never accept AI outputs at face value; verify them through multiple perspectives to reach more reliable understanding.”

AI should not be a threat to humanity, but a tool for turning human intention into reality.

Karisu — Profile

A leading Japanese medical AI researcher (first-authored papers cited approximately 2,000 times). Passed the University of Tokyo entrance exam at age 16. Holds a Ph.D. in Information Science and Technology. For over 10 years, he has conducted research in Japan and abroad to resolve the shortage of medical imaging data. To fundamentally solve this issue, he founded Karisto Inc., which distributes Japan's diverse, high-quality medical imaging datasets. He also serves as Visiting Associate Professor at Osaka University, Specially Appointed Associate Professor at Nagasaki University, and Board Member of the Japan Digital Pathology Research Society.
