By Gov1 Staff
SAN DIEGO, Calif. — Artificial intelligence (AI) is increasingly assisting first responders in processing information quickly to improve emergency response times, although concerns about risks and implementation challenges persist. The Department of Homeland Security’s Science and Technology Directorate (S&T) is driving a research initiative to balance the benefits and limitations of AI, seeking feedback from responders and exploring AI-supported tools for situational awareness and decision support.
In collaboration with the University of California San Diego’s WIFIRE Edge program, S&T has enhanced wildfire data collection to support emergency response in rapidly shifting fire conditions. S&T has also partnered with Homeland Security Investigations (HSI) to deploy AI tools that contributed to a 50% increase in fentanyl seizures and aided Operation Renewed Hope, which rescued 311 victims of sexual exploitation in 2023.
Key AI applications identified by the First Responder Resource Group (FRRG) include patient triage in mass casualty events, real-time language translation, and managing emergency call overload by parsing similar incident reports. Paul McDonagh, First Responder Portfolio Manager at S&T, emphasized that responders favor human-in-the-loop systems: “Our responders are pretty confident that they do not want to turn it all over to AI yet,” he noted.
Feedback from FRRG members revealed AI’s potential in documentation. “If we could leverage AI to complete like 90% of the documentation... it would be a huge improvement,” said Brady Robinette of Lubbock Fire Rescue. Similar input informed S&T’s AI Landscape Assessment and exercises, helping identify gaps in Emergency Operations Centers’ capabilities and shaping future development for enhanced response capabilities.
Alongside AI-driven projects like predictive call-handling software, S&T has tested AI tools for object detection and weapon recognition through New York partnerships. Brian Henz, senior science advisor at S&T, noted that these tests gather essential responder feedback before tools go live: “The AI solution is being developed by AI companies, but before it becomes public-facing... we need to understand, ‘How do we test that system to the best of our ability?’”
However, AI carries risks, including privacy concerns, deepfake misuse, and potential abuse such as swatting. “They all want it, but... they want it done correctly,” McDonagh explained. As S&T continues to assess cost-benefit trade-offs and governance, it remains committed to delivering practical, safe solutions that respect the complexities of emergency response work.
Gov1 is using generative AI to create some content that is edited and fact-checked by our editors.