GitHub adds AI-powered bug detection to expand security coverage
GitHub is adopting AI-based scanning for its Code Security tool to expand vulnerability detection beyond its CodeQL static analysis engine and cover more languages and frameworks.

Aggregated from BleepingComputer
Full Analysis

GitHub is adopting AI-based scanning for its Code Security tool to expand vulnerability detection beyond its CodeQL static analysis engine and cover more languages and frameworks.
The developer collaboration platform says that the move is meant to uncover security issues "in areas that are difficult to support with traditional static analysis alone."
CodeQL will continue to provide deep semantic analysis for supported languages, while AI detections will provide broader coverage for Shell/Bash, Dockerfiles, Terraform, PHP, and other ecosystems.
The new hybrid model is expected to enter public preview in early Q2 2026, possibly as soon as next month.
Finding bugs before they bite
GitHub Code Security is a set of application security tools integrated directly into GitHub repositories and workflows.
It is available for free (with limitations) for all public repositories. However, paying users can access the full set of features for private/internal repositories as part of the GitHub Advanced Security (GHAS) add-on suite.
It offers code scanning for known vulnerabilities, dependency scanning to pinpoint vulnerable open-source libraries, and secret scanning to uncover credentials leaked in public assets, along with security alerts that include Copilot-powered remediation suggestions.
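To illustrate the idea behind secret scanning, the sketch below matches known credential formats against committed text. This is a deliberately simplified assumption of how such a scanner works, not GitHub's actual implementation, which uses far more patterns plus partner-verified validity checks:

```python
import re

# Simplified patterns for two common credential formats (illustrative only).
SECRET_PATTERNS = {
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs for credential-like strings."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group()))
    return findings

# A hardcoded key (AWS's documented example key) would be flagged:
print(scan_for_secrets('AWS_KEY = "AKIAIOSFODNN7EXAMPLE"'))
```

Real scanners also reduce false positives by checking entropy and, for some providers, validating candidate tokens with the issuer before alerting.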
The security tools operate at the pull request level, with the platform selecting the appropriate engine (CodeQL or AI detection) for each case, so issues are caught before potentially problematic code is merged.
If any issues, such as weak cryptography, misconfigurations, or insecure SQL, are detected, those are presented directly in the pull request.
GitHub’s internal testing showed that the system processed over 170,000 findings in 30 days, with 80% positive developer feedback indicating that the flagged issues were valid.
These results showed “strong coverage” of the target ecosystems that had not been sufficiently scrutinized before.
GitHub also highlights the importance of Copilot Autofix, which suggests solutions for the problems detected through GitHub Code Security.
Statistics from 2025, covering over 460,000 security alerts handled by Autofix, show an average resolution time of 0.66 hours, compared to 1.29 hours when Autofix wasn’t used, roughly halving the time to fix.
GitHub’s adoption of AI-powered vulnerability detection reflects a broader shift toward security tooling that is both AI-augmented and natively embedded in the development workflow itself.
Originally published by BleepingComputer