Urban planners in Vienna, Austria, installed the city's first smart traffic lights specifically designed to improve pedestrian safety in 2018. After years of analysis and refinement, researchers at Graz University of Technology (TU Graz) have now rolled out a second generation of far more complex, deep learning-based software to 21 lights at four crosswalks. Unlike its predecessor, the new system is programmed to better assist pedestrians with walking aids, wheelchairs, and even baby strollers.
People with disabilities are disproportionately at risk when crossing busy streets. Pedestrians using wheelchairs, for example, are 36 percent more likely to die in a car-related accident than pedestrians struck while standing. This is often due to a combination of factors, including reduced visibility for drivers and longer crossing times for wheelchair users. And while smart traffic light cameras detect most pedestrians, they often have difficulty doing so for people with limited mobility. In the US, for example, researchers are working on specialized apps that help people with disabilities navigate routes and coordinate with traffic cameras.
According to a TU Graz profile published on November 28th, the upgraded smart crosswalk lights may largely resolve these limitations without the need for apps, thanks to thousands of times more computing power than the original programming. Floating point operations per second, or flops, measure how many calculations a system can perform each second; floating point numbers let computers handle values across large dynamic ranges. Teraflops, or one trillion floating point operations per second, are most often found in high-performance graphics cards and supercomputers. In 2018, the lights analyzed their surroundings with 0.5 teraflops of computing power, but the upgraded technology now harnesses anywhere between 100 and 300 teraflops for its calculations.
“This allows us to use a more complex and thus, more capable machine learning model, which means that people can be detected more accurately and robustly,” project manager Horst Possegger said in a statement. “People with mobility impairments usually need longer to cross the road. Our traffic light system is able to recognize such needs very reliably so that the green phase can be extended as required.”
To build the latest pedestrian-analysis software, programmers amassed an image dataset of street scenes involving varying numbers of people, different configurations, and a range of mobility aids. Instead of gathering photos of unsuspecting or anonymized strangers, however, researchers recruited volunteers to stage scenes at TU Graz's Inffeldgasse campus in order to respect people's privacy. The resulting deep learning model predicts when a person wants to cross a road with 99 percent accuracy, while mobility restrictions are detected with at least 85 percent accuracy. Even when a classification error occurs, the traffic light's green phase is still requested for at least its standard duration.
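The decision rule described above can be sketched in a few lines of Python. This is purely illustrative: the TU Graz software is not public, and the function name and the green-phase durations here are hypothetical placeholders.

```python
# Hypothetical sketch of the green-phase logic described above. The actual
# TU Graz system is not public; all names and durations are illustrative.

STANDARD_GREEN_S = 10.0   # assumed standard green-phase duration (seconds)
EXTENDED_GREEN_S = 16.0   # assumed extended duration for slower pedestrians

def green_phase_duration(wants_to_cross: bool, mobility_restricted: bool) -> float:
    """Return the green-phase length to request for a detected pedestrian."""
    if not wants_to_cross:
        return 0.0                  # no crossing request at all
    if mobility_restricted:
        return EXTENDED_GREEN_S     # give slower pedestrians extra time
    # On a missed or uncertain mobility classification, the light still
    # falls back to its standard green phase, as the article notes.
    return STANDARD_GREEN_S

print(green_phase_duration(True, True))   # extended phase
print(green_phase_duration(True, False))  # standard phase
```

The key design point is the fallback: a misclassification never results in less than the standard green time, only a missed extension.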
Each camera monitors a roughly 323-square-foot waiting area around its smart traffic light, where pedestrians stand before crossing. Privacy is a major concern for the designers here as well: in every instance, cameras process and delete real-time image data in under 50 milliseconds. The only information stored for later use is the number of pedestrians, along with their potential mobility-restriction classifications. System designers hope this anonymous statistical data could soon help urban planners better coordinate traffic light systems, or even eventually redesign entire smart light schedules.
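The process-and-discard pattern described above might look something like the following sketch. Everything here is an assumption for illustration: the detector is faked, and none of these names come from the deployed system.

```python
# Illustrative sketch of the privacy pattern described above: raw frames are
# analyzed and discarded immediately; only anonymous counts are retained.
# All names are hypothetical placeholders, not the deployed system's code.

from dataclasses import dataclass

@dataclass
class CrossingStats:
    pedestrian_count: int
    mobility_restricted_count: int

def fake_detector(frame: bytes) -> list[dict]:
    # Stand-in for the camera's deep learning person detector.
    return [{"restricted": False}, {"restricted": True}]

def process_frame(frame: bytes) -> CrossingStats:
    # A real system would run detection within the ~50 ms budget the
    # article mentions; here the detection step is faked.
    detections = fake_detector(frame)
    stats = CrossingStats(
        pedestrian_count=len(detections),
        mobility_restricted_count=sum(1 for d in detections if d["restricted"]),
    )
    del frame  # the raw image is never stored; only the counts survive
    return stats

stats = process_frame(b"raw-image-bytes")
print(stats.pedestrian_count, stats.mobility_restricted_count)  # 2 1
```

The point of the structure is that nothing identifying ever leaves `process_frame`: downstream consumers, like the urban planners mentioned above, only ever see aggregate counts.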