Academically Validated. Peer-Reviewed. Built for Truth.
BiasBreak is not just another AI tool: it is built on a foundation of rigorous academic research. The technology powering BiasBreak was developed by Arif Wali, founder of BiasBreak and a graduate of Middlesex University London, in collaboration with two independent academics. The research was formally peer-reviewed and published in an internationally recognized scientific journal, giving BiasBreak a level of scientific credibility that most AI tools simply do not have.
Published Research
Title: Towards a News Authenticity Predictor (NAP AI)
Authors: Arif Wali, Stelios Kapetanakis, Giacomo Nalli
Published in: Engineering Proceedings, MDPI, 2026
DOI: 10.3390/engproc2026124089
Presented at: 6th International Electronic Conference on Applied Sciences, December 2025
This paper introduces the core AI system behind BiasBreak — a News Authenticity Predictor that uses Large Language Models and Natural Language Processing to detect misinformation, bias, and unverified claims in online content.
What the Research Found
The model was rigorously tested on a dataset of 1,118 real and fake news articles. Here is what the results showed:
- 98.03% overall accuracy
- 98.15% precision (fake-news detection)
- 98.15% recall
- 98.15% F1-score
Of the 1,118 articles tested, only 22 were misclassified, and those errors were evenly split between false positives and false negatives, meaning the model does not systematically lean toward labeling content as either fake or real.
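The headline figures above can be re-derived from a confusion matrix. The per-class counts below are an assumption chosen to be consistent with the published numbers (1,118 articles, 22 errors split evenly); they are not taken from the paper itself:

```python
# Re-deriving the reported metrics from a confusion matrix.
# NOTE: the per-class counts are assumed, chosen only so that the
# totals match the published figures, not copied from the paper.
tp, fn = 584, 11   # fake articles: correctly caught / missed
tn, fp = 512, 11   # real articles: correctly passed / wrongly flagged

total = tp + fn + tn + fp                  # 1,118 articles in all
accuracy = (tp + tn) / total               # share of all articles classified correctly
precision = tp / (tp + fp)                 # of articles flagged fake, how many were fake
recall = tp / (tp + fn)                    # of fake articles, how many were caught
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.2%}  precision={precision:.2%}  "
      f"recall={recall:.2%}  f1={f1:.2%}")
```

Because precision and recall come out equal under this split, the F1-score (their harmonic mean) matches them exactly, which is why the paper reports the same 98.15% for all three.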
How the AI Works
The research behind BiasBreak combines two powerful approaches:
1. Fine-Tuned BERT Model
BiasBreak uses BERT (Bidirectional Encoder Representations from Transformers), a state-of-the-art language model developed by Google. It was fine-tuned on a diverse dataset including authentic news from outlets like BBC, Reuters, and The Guardian, as well as fake news articles verified by Snopes and FactCheck.org. BERT's bidirectional architecture allows it to understand the full context around each word, not just its surface meaning, making it exceptionally effective at spotting subtle linguistic patterns common in misinformation.
2. Real-Time External Verification
For claims requiring external validation, BiasBreak uses a retrieval-augmentation mechanism, querying live search results and cross-referencing claims against reputable sources in real time. This two-step process combines deep language understanding with live fact-checking for a more reliable and dynamic result.
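The two-step flow can be sketched as follows. This is a minimal illustration, not BiasBreak's implementation: `model_score` is a toy stand-in for the fine-tuned BERT classifier, and `search_snippets` is a hypothetical stub for the live retrieval step (no real search API is called here):

```python
# Sketch of a two-step verification pipeline:
#   step 1: a language-model score (toy heuristic standing in for BERT)
#   step 2: retrieval of external evidence (stubbed, no network access)

def model_score(text: str) -> float:
    """Stand-in for the fine-tuned classifier: returns a rough
    probability that the text is authentic. Real systems would run
    the text through a fine-tuned BERT model instead."""
    suspicious = ("shocking", "you won't believe", "miracle")
    hits = sum(kw in text.lower() for kw in suspicious)
    return max(0.0, 1.0 - 0.4 * hits)

def search_snippets(claim: str) -> list[str]:
    """Stand-in for retrieval augmentation: a real system would query
    live search results and return snippets from reputable sources."""
    return []  # stubbed out in this sketch

def verify(text: str, threshold: float = 0.5) -> dict:
    score = model_score(text)          # step 1: deep language understanding
    evidence = search_snippets(text)   # step 2: live cross-referencing
    return {
        "score": score,
        "corroborated": len(evidence) > 0,
        "verdict": "likely authentic" if score >= threshold else "likely fake",
    }

result = verify("Shocking miracle cure doctors hate!")
```

The design point is the separation of concerns: the model judges language in isolation, while retrieval grounds specific claims against outside sources, and the final verdict can weigh both signals.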
What We Trained On
The model was trained on a carefully curated dataset combining multiple sources:
- Authentic news from BBC, Reuters, and The Guardian
- Fake news flagged by Snopes and FactCheck.org
- Strongly biased and partisan content
- Clickbait headlines
- Factual claims with and without credible references
- Established datasets including LIAR and FakeNewsNet
Authors & Academic Affiliations
This research was independently conducted and co-authored by:
Arif Wali — Founder of BiasBreak and Information Technology (IT) graduate, Middlesex University London
Stelios Kapetanakis — Independent academic affiliated with Middlesex University London
Giacomo Nalli — Independent academic affiliated with Middlesex University London and Distributed Analytics Solutions, London
The university affiliation reflects the individual authors’ academic associations only and does not imply institutional endorsement or sponsorship of BiasBreak by Middlesex University London.
Experience the Research in Action
The same AI validated in our peer-reviewed study powers every analysis you run on BiasBreak. Try it yourself — paste any article, URL, or text and see the science at work.